Feb 13 20:43:28.084922 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 20:43:28.084978 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:43:28.084993 kernel: BIOS-provided physical RAM map:
Feb 13 20:43:28.085004 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 20:43:28.085015 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 13 20:43:28.085025 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 13 20:43:28.085038 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Feb 13 20:43:28.085053 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Feb 13 20:43:28.085064 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 13 20:43:28.085076 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 13 20:43:28.085087 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 13 20:43:28.085098 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 13 20:43:28.085109 kernel: printk: bootconsole [earlyser0] enabled
Feb 13 20:43:28.085121 kernel: NX (Execute Disable) protection: active
Feb 13 20:43:28.085137 kernel: APIC: Static calls initialized
Feb 13 20:43:28.085150 kernel: efi: EFI v2.7 by Microsoft
Feb 13 20:43:28.085163 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98
Feb 13 20:43:28.085175 kernel: SMBIOS 3.1.0 present.
Feb 13 20:43:28.085188 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Feb 13 20:43:28.085201 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 13 20:43:28.085213 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 13 20:43:28.085225 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0
Feb 13 20:43:28.085238 kernel: Hyper-V: Nested features: 0x1e0101
Feb 13 20:43:28.085250 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 13 20:43:28.085265 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 13 20:43:28.085277 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 13 20:43:28.085290 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 13 20:43:28.085304 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 13 20:43:28.085317 kernel: tsc: Detected 2593.907 MHz processor
Feb 13 20:43:28.085329 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:43:28.085342 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:43:28.085355 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 13 20:43:28.085368 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 20:43:28.085384 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:43:28.085396 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 13 20:43:28.085409 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 13 20:43:28.085421 kernel: Using GB pages for direct mapping
Feb 13 20:43:28.085434 kernel: Secure boot disabled
Feb 13 20:43:28.085446 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:43:28.085460 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 13 20:43:28.085478 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:28.085494 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:28.085508 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 13 20:43:28.085521 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 13 20:43:28.085535 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:28.085548 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:28.085562 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:28.085578 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:28.085592 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:28.085606 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:28.085619 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:28.085633 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 13 20:43:28.085646 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 13 20:43:28.085660 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 13 20:43:28.085674 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 13 20:43:28.085690 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 13 20:43:28.085704 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 13 20:43:28.085717 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 13 20:43:28.085731 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 13 20:43:28.085744 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 13 20:43:28.085758 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 13 20:43:28.085771 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:43:28.085784 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 20:43:28.085798 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 13 20:43:28.085814 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 13 20:43:28.085828 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 13 20:43:28.085841 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 13 20:43:28.085855 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 13 20:43:28.085868 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 13 20:43:28.085882 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 13 20:43:28.085896 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 13 20:43:28.085927 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 13 20:43:28.085942 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 13 20:43:28.085958 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 13 20:43:28.085972 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 13 20:43:28.085986 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 13 20:43:28.085999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 13 20:43:28.086013 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 13 20:43:28.086026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 13 20:43:28.086040 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 13 20:43:28.086054 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 13 20:43:28.086067 kernel: Zone ranges:
Feb 13 20:43:28.086083 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:43:28.086097 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 20:43:28.086110 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 13 20:43:28.086123 kernel: Movable zone start for each node
Feb 13 20:43:28.086136 kernel: Early memory node ranges
Feb 13 20:43:28.086150 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 20:43:28.086164 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 13 20:43:28.086177 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 13 20:43:28.086190 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 13 20:43:28.086206 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 13 20:43:28.086220 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:43:28.086233 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 20:43:28.086247 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 13 20:43:28.086260 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 13 20:43:28.086274 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 13 20:43:28.086288 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 13 20:43:28.086301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:43:28.086315 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:43:28.086330 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 13 20:43:28.086344 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 20:43:28.086358 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 13 20:43:28.086371 kernel: Booting paravirtualized kernel on Hyper-V
Feb 13 20:43:28.086385 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:43:28.086398 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 20:43:28.086412 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 20:43:28.086426 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 20:43:28.086439 kernel: pcpu-alloc: [0] 0 1
Feb 13 20:43:28.086455 kernel: Hyper-V: PV spinlocks enabled
Feb 13 20:43:28.086468 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 20:43:28.086483 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:43:28.086497 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:43:28.086510 kernel: random: crng init done
Feb 13 20:43:28.086524 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 20:43:28.086537 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:43:28.086551 kernel: Fallback order for Node 0: 0
Feb 13 20:43:28.086567 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 13 20:43:28.086591 kernel: Policy zone: Normal
Feb 13 20:43:28.086605 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:43:28.086622 kernel: software IO TLB: area num 2.
Feb 13 20:43:28.086637 kernel: Memory: 8077072K/8387460K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 310128K reserved, 0K cma-reserved)
Feb 13 20:43:28.086651 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:43:28.086665 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 20:43:28.086680 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:43:28.086694 kernel: Dynamic Preempt: voluntary
Feb 13 20:43:28.086708 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:43:28.086724 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:43:28.086741 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:43:28.086756 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:43:28.086770 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:43:28.086785 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:43:28.086799 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:43:28.086815 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:43:28.086828 kernel: Using NULL legacy PIC
Feb 13 20:43:28.086853 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 13 20:43:28.086879 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:43:28.086912 kernel: Console: colour dummy device 80x25
Feb 13 20:43:28.086926 kernel: printk: console [tty1] enabled
Feb 13 20:43:28.086939 kernel: printk: console [ttyS0] enabled
Feb 13 20:43:28.086952 kernel: printk: bootconsole [earlyser0] disabled
Feb 13 20:43:28.086965 kernel: ACPI: Core revision 20230628
Feb 13 20:43:28.086980 kernel: Failed to register legacy timer interrupt
Feb 13 20:43:28.086997 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:43:28.087010 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 20:43:28.087021 kernel: Hyper-V: Using IPI hypercalls
Feb 13 20:43:28.087035 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Feb 13 20:43:28.087050 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Feb 13 20:43:28.087064 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Feb 13 20:43:28.087077 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Feb 13 20:43:28.087090 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Feb 13 20:43:28.087104 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Feb 13 20:43:28.087121 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Feb 13 20:43:28.087138 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 20:43:28.087156 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 20:43:28.087168 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:43:28.087181 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 20:43:28.087198 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:43:28.087212 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 20:43:28.087226 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 20:43:28.087238 kernel: RETBleed: Vulnerable
Feb 13 20:43:28.087255 kernel: Speculative Store Bypass: Vulnerable
Feb 13 20:43:28.087268 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:43:28.087281 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:43:28.087295 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:43:28.087310 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:43:28.087322 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:43:28.087337 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 20:43:28.087352 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 20:43:28.087367 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 20:43:28.087382 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:43:28.087397 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 13 20:43:28.087415 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 13 20:43:28.087429 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 13 20:43:28.087444 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 13 20:43:28.087459 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:43:28.087474 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:43:28.087489 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:43:28.087504 kernel: landlock: Up and running.
Feb 13 20:43:28.087519 kernel: SELinux: Initializing.
Feb 13 20:43:28.087534 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:43:28.087549 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:43:28.087564 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 20:43:28.087579 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:43:28.087597 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:43:28.087613 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:43:28.087628 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 20:43:28.087643 kernel: signal: max sigframe size: 3632
Feb 13 20:43:28.087659 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:43:28.087674 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:43:28.087688 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 20:43:28.087701 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:43:28.087716 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:43:28.087734 kernel: .... node #0, CPUs: #1
Feb 13 20:43:28.087749 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 13 20:43:28.087765 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 20:43:28.087780 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:43:28.087795 kernel: smpboot: Max logical packages: 1
Feb 13 20:43:28.087810 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 13 20:43:28.087824 kernel: devtmpfs: initialized
Feb 13 20:43:28.087839 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:43:28.087857 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 13 20:43:28.087870 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:43:28.087884 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:43:28.087898 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:43:28.088012 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:43:28.088022 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:43:28.088030 kernel: audit: type=2000 audit(1739479406.027:1): state=initialized audit_enabled=0 res=1
Feb 13 20:43:28.088038 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:43:28.088046 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:43:28.088058 kernel: cpuidle: using governor menu
Feb 13 20:43:28.088066 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:43:28.088074 kernel: dca service started, version 1.12.1
Feb 13 20:43:28.088082 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Feb 13 20:43:28.088093 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 20:43:28.088102 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:43:28.088113 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:43:28.088123 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:43:28.088133 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:43:28.088145 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:43:28.088156 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:43:28.088167 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:43:28.088177 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:43:28.088186 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:43:28.088197 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 20:43:28.088205 kernel: ACPI: Interpreter enabled
Feb 13 20:43:28.088216 kernel: ACPI: PM: (supports S0 S5)
Feb 13 20:43:28.088224 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:43:28.088236 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:43:28.088244 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 20:43:28.088252 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 13 20:43:28.088261 kernel: iommu: Default domain type: Translated
Feb 13 20:43:28.088272 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:43:28.088280 kernel: efivars: Registered efivars operations
Feb 13 20:43:28.088291 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:43:28.088299 kernel: PCI: System does not support PCI
Feb 13 20:43:28.088308 kernel: vgaarb: loaded
Feb 13 20:43:28.088320 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 13 20:43:28.088328 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:43:28.088339 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:43:28.088347 kernel: pnp: PnP ACPI init
Feb 13 20:43:28.088358 kernel: pnp: PnP ACPI: found 3 devices
Feb 13 20:43:28.088367 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:43:28.088377 kernel: NET: Registered PF_INET protocol family
Feb 13 20:43:28.088386 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 20:43:28.088395 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 20:43:28.088407 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:43:28.088416 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:43:28.088427 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 20:43:28.088435 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 20:43:28.088446 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 20:43:28.088454 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 20:43:28.088464 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:43:28.088473 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:43:28.088481 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:43:28.088500 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 20:43:28.088512 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
Feb 13 20:43:28.088523 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 20:43:28.088532 kernel: Initialise system trusted keyrings
Feb 13 20:43:28.088542 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 20:43:28.088552 kernel: Key type asymmetric registered
Feb 13 20:43:28.088562 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:43:28.088570 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:43:28.088580 kernel: io scheduler mq-deadline registered
Feb 13 20:43:28.088591 kernel: io scheduler kyber registered
Feb 13 20:43:28.088601 kernel: io scheduler bfq registered
Feb 13 20:43:28.088610 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:43:28.088618 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:43:28.088629 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:43:28.088638 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 20:43:28.088649 kernel: i8042: PNP: No PS/2 controller found.
Feb 13 20:43:28.088794 kernel: rtc_cmos 00:02: registered as rtc0
Feb 13 20:43:28.088975 kernel: rtc_cmos 00:02: setting system clock to 2025-02-13T20:43:27 UTC (1739479407)
Feb 13 20:43:28.089070 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 13 20:43:28.089083 kernel: intel_pstate: CPU model not supported
Feb 13 20:43:28.089092 kernel: efifb: probing for efifb
Feb 13 20:43:28.089101 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 20:43:28.089109 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 20:43:28.089117 kernel: efifb: scrolling: redraw
Feb 13 20:43:28.089125 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 20:43:28.089144 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 20:43:28.089154 kernel: fb0: EFI VGA frame buffer device
Feb 13 20:43:28.089162 kernel: pstore: Using crash dump compression: deflate
Feb 13 20:43:28.089170 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 20:43:28.089178 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:43:28.089186 kernel: Segment Routing with IPv6
Feb 13 20:43:28.089201 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:43:28.089209 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:43:28.089218 kernel: Key type dns_resolver registered
Feb 13 20:43:28.089231 kernel: IPI shorthand broadcast: enabled
Feb 13 20:43:28.089241 kernel: sched_clock: Marking stable (793004100, 42132700)->(1027837300, -192700500)
Feb 13 20:43:28.089250 kernel: registered taskstats version 1
Feb 13 20:43:28.089258 kernel: Loading compiled-in X.509 certificates
Feb 13 20:43:28.089270 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 20:43:28.089278 kernel: Key type .fscrypt registered
Feb 13 20:43:28.089288 kernel: Key type fscrypt-provisioning registered
Feb 13 20:43:28.089297 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:43:28.089306 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:43:28.089318 kernel: ima: No architecture policies found
Feb 13 20:43:28.089327 kernel: clk: Disabling unused clocks
Feb 13 20:43:28.089338 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 20:43:28.089346 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 20:43:28.089357 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 20:43:28.089365 kernel: Run /init as init process
Feb 13 20:43:28.089376 kernel: with arguments:
Feb 13 20:43:28.089384 kernel: /init
Feb 13 20:43:28.089395 kernel: with environment:
Feb 13 20:43:28.089405 kernel: HOME=/
Feb 13 20:43:28.089415 kernel: TERM=linux
Feb 13 20:43:28.089424 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:43:28.089435 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:43:28.089455 systemd[1]: Detected virtualization microsoft.
Feb 13 20:43:28.089466 systemd[1]: Detected architecture x86-64.
Feb 13 20:43:28.089477 systemd[1]: Running in initrd.
Feb 13 20:43:28.089488 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:43:28.089501 systemd[1]: Hostname set to <localhost>.
Feb 13 20:43:28.089512 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:43:28.089521 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:43:28.089531 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:43:28.089541 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:43:28.089551 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:43:28.089562 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:43:28.089571 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:43:28.089584 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:43:28.089595 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:43:28.089606 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:43:28.089615 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:43:28.089626 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:43:28.089635 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:43:28.089647 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:43:28.089658 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:43:28.089669 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:43:28.089678 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:43:28.089690 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:43:28.089698 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:43:28.089709 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:43:28.089719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:43:28.089728 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:43:28.089741 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:43:28.089750 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:43:28.089761 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:43:28.089770 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:43:28.089781 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:43:28.089790 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:43:28.089802 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:43:28.089810 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:43:28.089821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:28.089859 systemd-journald[176]: Collecting audit messages is disabled.
Feb 13 20:43:28.089884 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:43:28.089895 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:43:28.089917 systemd-journald[176]: Journal started
Feb 13 20:43:28.089944 systemd-journald[176]: Runtime Journal (/run/log/journal/353e4430d8b5446a9e603068e2761848) is 8.0M, max 158.8M, 150.8M free.
Feb 13 20:43:28.087180 systemd-modules-load[177]: Inserted module 'overlay'
Feb 13 20:43:28.098057 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:43:28.098683 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:43:28.111211 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:43:28.120623 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:43:28.125355 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:28.142304 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:43:28.169001 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:43:28.169029 kernel: Bridge firewalling registered
Feb 13 20:43:28.154764 systemd-modules-load[177]: Inserted module 'br_netfilter'
Feb 13 20:43:28.156404 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:43:28.164421 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:43:28.182239 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:43:28.186058 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:43:28.186445 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:43:28.206040 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:43:28.213121 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:43:28.220695 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:43:28.228065 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:43:28.237087 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:43:28.242612 dracut-cmdline[211]: dracut-dracut-053
Feb 13 20:43:28.245986 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:43:28.303596 systemd-resolved[215]: Positive Trust Anchors:
Feb 13 20:43:28.305935 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:43:28.305994 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:43:28.331332 systemd-resolved[215]: Defaulting to hostname 'linux'.
Feb 13 20:43:28.340199 kernel: SCSI subsystem initialized
Feb 13 20:43:28.332615 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:43:28.343175 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:43:28.351927 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:43:28.362930 kernel: iscsi: registered transport (tcp)
Feb 13 20:43:28.384887 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:43:28.384998 kernel: QLogic iSCSI HBA Driver
Feb 13 20:43:28.420166 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:43:28.430033 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:43:28.458542 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:43:28.458626 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:43:28.461646 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:43:28.501944 kernel: raid6: avx512x4 gen() 18345 MB/s
Feb 13 20:43:28.520922 kernel: raid6: avx512x2 gen() 18421 MB/s
Feb 13 20:43:28.539924 kernel: raid6: avx512x1 gen() 18452 MB/s
Feb 13 20:43:28.559923 kernel: raid6: avx2x4 gen() 18332 MB/s
Feb 13 20:43:28.578919 kernel: raid6: avx2x2 gen() 18300 MB/s
Feb 13 20:43:28.598776 kernel: raid6: avx2x1 gen() 14040 MB/s
Feb 13 20:43:28.598834 kernel: raid6: using algorithm avx512x1 gen() 18452 MB/s
Feb 13 20:43:28.620304 kernel: raid6: .... xor() 26888 MB/s, rmw enabled
Feb 13 20:43:28.620344 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 20:43:28.641933 kernel: xor: automatically using best checksumming function avx
Feb 13 20:43:28.792940 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:43:28.802354 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:43:28.812040 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:43:28.823630 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Feb 13 20:43:28.828041 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:43:28.848066 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:43:28.863718 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Feb 13 20:43:28.889185 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:43:28.899421 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:43:28.940016 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:43:28.954154 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:43:28.982202 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:43:28.990990 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:43:28.997713 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:43:29.003386 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:43:29.012101 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:43:29.026927 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:43:29.039449 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:43:29.055073 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 20:43:29.059176 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:43:29.060761 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:43:29.061782 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:43:29.069637 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:43:29.077266 kernel: hv_vmbus: Vmbus version:5.2
Feb 13 20:43:29.077519 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:43:29.077777 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:29.080654 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:29.098187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:29.108694 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 20:43:29.108739 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 20:43:29.115613 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:43:29.121333 kernel: PTP clock support registered
Feb 13 20:43:29.121104 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:29.132564 kernel: hv_utils: Registering HyperV Utility Driver
Feb 13 20:43:29.132603 kernel: hv_vmbus: registering driver hv_utils
Feb 13 20:43:29.136965 kernel: hv_utils: Shutdown IC version 3.2
Feb 13 20:43:29.137002 kernel: hv_utils: Heartbeat IC version 3.0
Feb 13 20:43:29.139237 kernel: hv_utils: TimeSync IC version 4.0
Feb 13 20:43:29.917317 systemd-resolved[215]: Clock change detected. Flushing caches.
Feb 13 20:43:29.921272 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:29.925332 kernel: hv_vmbus: registering driver hv_storvsc
Feb 13 20:43:29.935987 kernel: scsi host0: storvsc_host_t
Feb 13 20:43:29.936200 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 13 20:43:29.936233 kernel: scsi host1: storvsc_host_t
Feb 13 20:43:29.942238 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 20:43:29.944969 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 20:43:29.945001 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:43:29.951651 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 13 20:43:29.951684 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 13 20:43:29.968254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:29.980175 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 20:43:29.980694 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:43:29.996361 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 13 20:43:29.996413 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 13 20:43:30.011267 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 13 20:43:30.012298 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 20:43:30.012316 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 13 20:43:30.011194 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:43:30.036106 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 13 20:43:30.049027 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 20:43:30.049226 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 20:43:30.049391 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 13 20:43:30.049558 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 13 20:43:30.049724 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:43:30.049744 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 20:43:30.065996 kernel: hv_netvsc 7c1e5221-73cb-7c1e-5221-73cb7c1e5221 eth0: VF slot 1 added
Feb 13 20:43:30.076416 kernel: hv_vmbus: registering driver hv_pci
Feb 13 20:43:30.076463 kernel: hv_pci d7a6c328-2b43-457c-9840-37d87c153558: PCI VMBus probing: Using version 0x10004
Feb 13 20:43:30.122759 kernel: hv_pci d7a6c328-2b43-457c-9840-37d87c153558: PCI host bridge to bus 2b43:00
Feb 13 20:43:30.122945 kernel: pci_bus 2b43:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Feb 13 20:43:30.123142 kernel: pci_bus 2b43:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 13 20:43:30.123304 kernel: pci 2b43:00:02.0: [15b3:1016] type 00 class 0x020000
Feb 13 20:43:30.123492 kernel: pci 2b43:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 13 20:43:30.123670 kernel: pci 2b43:00:02.0: enabling Extended Tags
Feb 13 20:43:30.123839 kernel: pci 2b43:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2b43:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 13 20:43:30.124048 kernel: pci_bus 2b43:00: busn_res: [bus 00-ff] end is updated to 00
Feb 13 20:43:30.124226 kernel: pci 2b43:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 13 20:43:30.294507 kernel: mlx5_core 2b43:00:02.0: enabling device (0000 -> 0002)
Feb 13 20:43:30.518708 kernel: mlx5_core 2b43:00:02.0: firmware version: 14.30.5000
Feb 13 20:43:30.518879 kernel: hv_netvsc 7c1e5221-73cb-7c1e-5221-73cb7c1e5221 eth0: VF registering: eth1
Feb 13 20:43:30.519019 kernel: mlx5_core 2b43:00:02.0 eth1: joined to eth0
Feb 13 20:43:30.519156 kernel: mlx5_core 2b43:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Feb 13 20:43:30.517601 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Feb 13 20:43:30.529002 kernel: mlx5_core 2b43:00:02.0 enP11075s1: renamed from eth1
Feb 13 20:43:30.590192 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (448)
Feb 13 20:43:30.606111 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 20:43:30.627457 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (452)
Feb 13 20:43:30.627285 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Feb 13 20:43:30.642786 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Feb 13 20:43:30.650365 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Feb 13 20:43:30.671094 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:43:30.683013 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:43:30.689974 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:43:31.698018 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:43:31.698785 disk-uuid[602]: The operation has completed successfully.
Feb 13 20:43:31.795438 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:43:31.795551 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:43:31.816131 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:43:31.821919 sh[688]: Success
Feb 13 20:43:31.852533 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 20:43:32.027309 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:43:32.037076 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:43:32.043748 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:43:32.056984 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 20:43:32.057019 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:43:32.062658 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:43:32.065562 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:43:32.067873 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:43:32.408847 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:43:32.414584 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:43:32.425330 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:43:32.431520 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:43:32.444912 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:43:32.444951 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:43:32.444984 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:43:32.464432 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:43:32.473416 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:43:32.479317 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:43:32.484287 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:43:32.497167 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:43:32.532816 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:43:32.543115 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:43:32.562006 systemd-networkd[872]: lo: Link UP
Feb 13 20:43:32.562015 systemd-networkd[872]: lo: Gained carrier
Feb 13 20:43:32.564112 systemd-networkd[872]: Enumeration completed
Feb 13 20:43:32.564356 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:43:32.567117 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:43:32.567121 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:43:32.568689 systemd[1]: Reached target network.target - Network.
Feb 13 20:43:32.624978 kernel: mlx5_core 2b43:00:02.0 enP11075s1: Link up
Feb 13 20:43:32.658686 kernel: hv_netvsc 7c1e5221-73cb-7c1e-5221-73cb7c1e5221 eth0: Data path switched to VF: enP11075s1
Feb 13 20:43:32.658159 systemd-networkd[872]: enP11075s1: Link UP
Feb 13 20:43:32.658286 systemd-networkd[872]: eth0: Link UP
Feb 13 20:43:32.658469 systemd-networkd[872]: eth0: Gained carrier
Feb 13 20:43:32.658482 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:43:32.670192 systemd-networkd[872]: enP11075s1: Gained carrier
Feb 13 20:43:32.693016 systemd-networkd[872]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 13 20:43:33.232070 ignition[804]: Ignition 2.19.0
Feb 13 20:43:33.232082 ignition[804]: Stage: fetch-offline
Feb 13 20:43:33.233621 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:43:33.232138 ignition[804]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:33.232149 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:33.232265 ignition[804]: parsed url from cmdline: ""
Feb 13 20:43:33.232270 ignition[804]: no config URL provided
Feb 13 20:43:33.232278 ignition[804]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:43:33.232288 ignition[804]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:43:33.232295 ignition[804]: failed to fetch config: resource requires networking
Feb 13 20:43:33.232516 ignition[804]: Ignition finished successfully
Feb 13 20:43:33.261136 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:43:33.276327 ignition[882]: Ignition 2.19.0
Feb 13 20:43:33.276338 ignition[882]: Stage: fetch
Feb 13 20:43:33.276545 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:33.276559 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:33.276646 ignition[882]: parsed url from cmdline: ""
Feb 13 20:43:33.276649 ignition[882]: no config URL provided
Feb 13 20:43:33.276653 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:43:33.276660 ignition[882]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:43:33.276680 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 13 20:43:33.360488 ignition[882]: GET result: OK
Feb 13 20:43:33.360612 ignition[882]: config has been read from IMDS userdata
Feb 13 20:43:33.360652 ignition[882]: parsing config with SHA512: 91288b374a6cef313454edcfe0567a69e869e1809e1f5cd6c3c5b22f684d11e90513327dbbd729b7278a3f424f92e9abcc54fa8b0e6c98e484cef5067ed06a68
Feb 13 20:43:33.366683 unknown[882]: fetched base config from "system"
Feb 13 20:43:33.366698 unknown[882]: fetched base config from "system"
Feb 13 20:43:33.367199 ignition[882]: fetch: fetch complete
Feb 13 20:43:33.366707 unknown[882]: fetched user config from "azure"
Feb 13 20:43:33.367205 ignition[882]: fetch: fetch passed
Feb 13 20:43:33.368885 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:43:33.367255 ignition[882]: Ignition finished successfully
Feb 13 20:43:33.379173 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:43:33.394752 ignition[888]: Ignition 2.19.0
Feb 13 20:43:33.394765 ignition[888]: Stage: kargs
Feb 13 20:43:33.395001 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:33.395015 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:33.395894 ignition[888]: kargs: kargs passed
Feb 13 20:43:33.395937 ignition[888]: Ignition finished successfully
Feb 13 20:43:33.406794 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:43:33.416302 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:43:33.432390 ignition[894]: Ignition 2.19.0
Feb 13 20:43:33.432405 ignition[894]: Stage: disks
Feb 13 20:43:33.434578 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:43:33.432638 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:33.438788 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:43:33.432652 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:33.433535 ignition[894]: disks: disks passed
Feb 13 20:43:33.433578 ignition[894]: Ignition finished successfully
Feb 13 20:43:33.453046 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:43:33.459101 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:43:33.461809 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:43:33.466610 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:43:33.480121 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:43:33.531028 systemd-fsck[902]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Feb 13 20:43:33.535869 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:43:33.549067 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:43:33.640980 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none.
Feb 13 20:43:33.641107 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:43:33.641804 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:43:33.672067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:43:33.677290 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:43:33.685167 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 20:43:33.698813 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (913)
Feb 13 20:43:33.698853 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:43:33.698874 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:43:33.700758 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:43:33.701605 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:43:33.708990 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:43:33.702141 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:43:33.716209 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:43:33.718418 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:43:33.731114 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:43:34.319354 coreos-metadata[915]: Feb 13 20:43:34.319 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 20:43:34.323417 coreos-metadata[915]: Feb 13 20:43:34.322 INFO Fetch successful
Feb 13 20:43:34.323417 coreos-metadata[915]: Feb 13 20:43:34.322 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 13 20:43:34.333218 coreos-metadata[915]: Feb 13 20:43:34.333 INFO Fetch successful
Feb 13 20:43:34.349136 coreos-metadata[915]: Feb 13 20:43:34.349 INFO wrote hostname ci-4081.3.1-a-d679334e6e to /sysroot/etc/hostname
Feb 13 20:43:34.351491 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:43:34.448319 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:43:34.469131 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:43:34.483324 systemd-networkd[872]: enP11075s1: Gained IPv6LL
Feb 13 20:43:34.487039 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:43:34.505424 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:43:34.547219 systemd-networkd[872]: eth0: Gained IPv6LL
Feb 13 20:43:35.304361 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:43:35.313121 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:43:35.321064 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:43:35.329807 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:43:35.330622 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:43:35.365267 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:43:35.368144 ignition[1031]: INFO : Ignition 2.19.0
Feb 13 20:43:35.368144 ignition[1031]: INFO : Stage: mount
Feb 13 20:43:35.368144 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:35.368144 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:35.368144 ignition[1031]: INFO : mount: mount passed
Feb 13 20:43:35.368144 ignition[1031]: INFO : Ignition finished successfully
Feb 13 20:43:35.383684 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:43:35.391147 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:43:35.399137 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:43:35.414982 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1042)
Feb 13 20:43:35.420617 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:43:35.420664 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:43:35.423044 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:43:35.427972 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:43:35.429400 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:43:35.451675 ignition[1059]: INFO : Ignition 2.19.0 Feb 13 20:43:35.451675 ignition[1059]: INFO : Stage: files Feb 13 20:43:35.455576 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:43:35.455576 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:43:35.455576 ignition[1059]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:43:35.479926 ignition[1059]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:43:35.479926 ignition[1059]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:43:35.536609 ignition[1059]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:43:35.540764 ignition[1059]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:43:35.540764 ignition[1059]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:43:35.537121 unknown[1059]: wrote ssh authorized keys file for user: core Feb 13 20:43:35.550991 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 20:43:35.555580 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 13 20:43:35.781071 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 20:43:36.049773 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 20:43:36.049773 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 20:43:36.631326 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 20:43:37.714101 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:43:37.714101 ignition[1059]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 20:43:37.750871 ignition[1059]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:43:37.759748 ignition[1059]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:43:37.759748 ignition[1059]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 20:43:37.759748 ignition[1059]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:43:37.759748 ignition[1059]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:43:37.759748 ignition[1059]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:43:37.759748 ignition[1059]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:43:37.759748 ignition[1059]: INFO : files: files passed Feb 13 20:43:37.759748 ignition[1059]: INFO : Ignition finished successfully Feb 13 20:43:37.753061 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:43:37.775134 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:43:37.794475 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:43:37.797708 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:43:37.797807 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:43:37.813488 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:43:37.818503 initrd-setup-root-after-ignition[1087]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:43:37.822324 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:43:37.823743 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:43:37.833860 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:43:37.849144 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:43:37.871089 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:43:37.871198 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
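The config that drove this files stage is not reproduced in the log; only its effects are. As a rough sketch, an Ignition spec-3 config of approximately the following shape would produce the logged operations (fetching the helm tarball, writing /etc/flatcar/update.conf, enabling prepare-helm.service). Apart from the paths and URLs visible in the log lines above, every value here (spec version, file contents, unit body) is an assumption for illustration.

import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "storage": {
        "files": [
            {
                # Fetched over HTTPS during the files stage, as logged above.
                "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"
                },
            },
            {
                # Inline contents via a data: URL; the real body is unknown.
                "path": "/etc/flatcar/update.conf",
                "contents": {"source": "data:,GROUP=stable%0A"},
            },
            # ... install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml and the
            # kubernetes sysext would be further file/link entries.
        ]
    },
    "systemd": {
        "units": [
            {
                # Written and preset-enabled, matching ops (b)-(d) above;
                # the unit body itself is hypothetical.
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": (
                    "[Unit]\nDescription=Unpack helm to /opt/bin\n"
                    "[Service]\nType=oneshot\n"
                    "ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 "
                    "-xzf /opt/helm-v3.17.0-linux-amd64.tar.gz linux-amd64/helm\n"
                    "[Install]\nWantedBy=multi-user.target\n"
                ),
            }
        ]
    },
}

print(json.dumps(config, indent=2))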
Feb 13 20:43:37.879759 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:43:37.882219 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:43:37.884798 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:43:37.898372 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:43:37.911909 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:43:37.918078 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:43:37.929144 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:43:37.929313 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:43:37.929716 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:43:37.930086 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:43:37.930185 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:43:37.930906 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:43:37.931775 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:43:37.932676 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:43:37.933171 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:43:37.938237 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:43:37.938641 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:43:37.939038 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:43:37.939414 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:43:37.939793 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:43:37.940204 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:43:37.940596 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:43:37.940723 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:43:37.941408 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:43:37.941829 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:43:37.942586 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:43:37.976860 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:43:38.025511 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:43:38.025682 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:43:38.033473 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:43:38.033651 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:43:38.042247 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:43:38.042413 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:43:38.047373 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:43:38.052146 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:43:38.067473 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Feb 13 20:43:38.069816 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:43:38.070042 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:43:38.079945 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:43:38.086824 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:43:38.100743 ignition[1111]: INFO : Ignition 2.19.0 Feb 13 20:43:38.100743 ignition[1111]: INFO : Stage: umount Feb 13 20:43:38.100743 ignition[1111]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:43:38.100743 ignition[1111]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:43:38.100743 ignition[1111]: INFO : umount: umount passed Feb 13 20:43:38.100743 ignition[1111]: INFO : Ignition finished successfully Feb 13 20:43:38.087003 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:43:38.090274 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:43:38.090424 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:43:38.095743 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:43:38.095863 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:43:38.114559 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:43:38.114648 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:43:38.121744 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:43:38.121861 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:43:38.123013 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:43:38.123051 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:43:38.123492 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:43:38.123526 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:43:38.123919 systemd[1]: Stopped target network.target - Network. Feb 13 20:43:38.124264 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:43:38.124309 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:43:38.124684 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:43:38.125060 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:43:38.140181 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:43:38.140729 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:43:38.145287 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:43:38.147576 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:43:38.154663 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:43:38.193812 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:43:38.193871 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:43:38.200622 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:43:38.203016 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:43:38.207493 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:43:38.207553 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:43:38.212460 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Feb 13 20:43:38.219714 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:43:38.220000 systemd-networkd[872]: eth0: DHCPv6 lease lost Feb 13 20:43:38.227354 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:43:38.227945 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:43:38.228059 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:43:38.234830 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:43:38.234926 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:43:38.240866 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:43:38.242177 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:43:38.245858 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:43:38.245918 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:43:38.259153 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:43:38.259216 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:43:38.276063 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:43:38.280738 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:43:38.280807 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:43:38.289098 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:43:38.289158 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:43:38.296258 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:43:38.296313 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:43:38.303624 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:43:38.303682 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:43:38.311902 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:43:38.328062 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:43:38.328222 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:43:38.333858 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:43:38.333898 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:43:38.339234 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:43:38.339276 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:43:38.344305 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:43:38.346804 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:43:38.356371 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:43:38.356419 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:43:38.365618 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:43:38.365678 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:43:38.374084 kernel: hv_netvsc 7c1e5221-73cb-7c1e-5221-73cb7c1e5221 eth0: Data path switched from VF: enP11075s1 Feb 13 20:43:38.389123 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Feb 13 20:43:38.391919 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:43:38.394639 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:43:38.397687 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:43:38.397744 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:43:38.403665 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:43:38.403751 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:43:38.419032 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:43:38.419150 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:43:38.424605 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:43:38.430114 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:43:38.456531 systemd[1]: Switching root. Feb 13 20:43:38.525699 systemd-journald[176]: Journal stopped Feb 13 20:43:28.085690 kernel:
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Feb 13 20:43:28.085704 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Feb 13 20:43:28.085717 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Feb 13 20:43:28.085731 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Feb 13 20:43:28.085744 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Feb 13 20:43:28.085758 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Feb 13 20:43:28.085771 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 20:43:28.085784 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 20:43:28.085798 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Feb 13 20:43:28.085814 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Feb 13 20:43:28.085828 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Feb 13 20:43:28.085841 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Feb 13 20:43:28.085855 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Feb 13 20:43:28.085868 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Feb 13 20:43:28.085882 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Feb 13 20:43:28.085896 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Feb 13 20:43:28.085927 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Feb 13 20:43:28.085942 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Feb 13 20:43:28.085958 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Feb 13 20:43:28.085972 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Feb 13 20:43:28.085986 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Feb 13 20:43:28.085999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Feb 13 20:43:28.086013 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Feb 13 20:43:28.086026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Feb 13 20:43:28.086040 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Feb 13 20:43:28.086054 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Feb 13 20:43:28.086067 kernel: Zone ranges: Feb 13 20:43:28.086083 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 20:43:28.086097 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 20:43:28.086110 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 20:43:28.086123 kernel: Movable zone start for each node Feb 13 20:43:28.086136 kernel: Early memory node ranges Feb 13 20:43:28.086150 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 13 20:43:28.086164 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Feb 13 20:43:28.086177 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Feb 13 20:43:28.086190 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 20:43:28.086206 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Feb 13 20:43:28.086220 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 20:43:28.086233 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 13 20:43:28.086247 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Feb 13 20:43:28.086260 kernel: ACPI: PM-Timer IO Port: 0x408 Feb 13 20:43:28.086274 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Feb 13 20:43:28.086288 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Feb 13 20:43:28.086301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 20:43:28.086315 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 20:43:28.086330 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Feb 13 20:43:28.086344 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 20:43:28.086358 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Feb 13 20:43:28.086371 kernel: Booting paravirtualized kernel on Hyper-V Feb 13 20:43:28.086385 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 20:43:28.086398 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 20:43:28.086412 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 20:43:28.086426 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 20:43:28.086439 kernel: pcpu-alloc: [0] 0 1 Feb 13 20:43:28.086455 kernel: Hyper-V: PV spinlocks enabled Feb 13 20:43:28.086468 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 20:43:28.086483 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:43:28.086497 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 20:43:28.086510 kernel: random: crng init done Feb 13 20:43:28.086524 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 13 20:43:28.086537 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 20:43:28.086551 kernel: Fallback order for Node 0: 0 Feb 13 20:43:28.086567 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Feb 13 20:43:28.086591 kernel: Policy zone: Normal Feb 13 20:43:28.086605 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 20:43:28.086622 kernel: software IO TLB: area num 2. Feb 13 20:43:28.086637 kernel: Memory: 8077072K/8387460K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 310128K reserved, 0K cma-reserved) Feb 13 20:43:28.086651 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 20:43:28.086665 kernel: ftrace: allocating 37921 entries in 149 pages Feb 13 20:43:28.086680 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 20:43:28.086694 kernel: Dynamic Preempt: voluntary Feb 13 20:43:28.086708 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 20:43:28.086724 kernel: rcu: RCU event tracing is enabled. Feb 13 20:43:28.086741 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 20:43:28.086756 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 20:43:28.086770 kernel: Rude variant of Tasks RCU enabled. Feb 13 20:43:28.086785 kernel: Tracing variant of Tasks RCU enabled. 
Feb 13 20:43:28.086799 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 20:43:28.086815 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 20:43:28.086828 kernel: Using NULL legacy PIC Feb 13 20:43:28.086853 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Feb 13 20:43:28.086879 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 20:43:28.086912 kernel: Console: colour dummy device 80x25 Feb 13 20:43:28.086926 kernel: printk: console [tty1] enabled Feb 13 20:43:28.086939 kernel: printk: console [ttyS0] enabled Feb 13 20:43:28.086952 kernel: printk: bootconsole [earlyser0] disabled Feb 13 20:43:28.086965 kernel: ACPI: Core revision 20230628 Feb 13 20:43:28.086980 kernel: Failed to register legacy timer interrupt Feb 13 20:43:28.086997 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 20:43:28.087010 kernel: Hyper-V: enabling crash_kexec_post_notifiers Feb 13 20:43:28.087021 kernel: Hyper-V: Using IPI hypercalls Feb 13 20:43:28.087035 kernel: APIC: send_IPI() replaced with hv_send_ipi() Feb 13 20:43:28.087050 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Feb 13 20:43:28.087064 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Feb 13 20:43:28.087077 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Feb 13 20:43:28.087090 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Feb 13 20:43:28.087104 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Feb 13 20:43:28.087121 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Feb 13 20:43:28.087138 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 20:43:28.087156 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 20:43:28.087168 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 20:43:28.087181 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 20:43:28.087198 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 20:43:28.087212 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 20:43:28.087226 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Feb 13 20:43:28.087238 kernel: RETBleed: Vulnerable Feb 13 20:43:28.087255 kernel: Speculative Store Bypass: Vulnerable Feb 13 20:43:28.087268 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 20:43:28.087281 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 20:43:28.087295 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 20:43:28.087310 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 20:43:28.087322 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 20:43:28.087337 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 13 20:43:28.087352 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 13 20:43:28.087367 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 13 20:43:28.087382 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 20:43:28.087397 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 13 20:43:28.087415 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 13 20:43:28.087429 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 13 20:43:28.087444 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Feb 13 20:43:28.087459 kernel: Freeing SMP alternatives memory: 32K Feb 13 20:43:28.087474 kernel: pid_max: default: 32768 minimum: 301 Feb 13 20:43:28.087489 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 20:43:28.087504 kernel: landlock: Up and running. Feb 13 20:43:28.087519 kernel: SELinux: Initializing. Feb 13 20:43:28.087534 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 20:43:28.087549 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 20:43:28.087564 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 13 20:43:28.087579 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:43:28.087597 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:43:28.087613 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:43:28.087628 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 13 20:43:28.087643 kernel: signal: max sigframe size: 3632 Feb 13 20:43:28.087659 kernel: rcu: Hierarchical SRCU implementation. Feb 13 20:43:28.087674 kernel: rcu: Max phase no-delay instances is 400. Feb 13 20:43:28.087688 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 20:43:28.087701 kernel: smp: Bringing up secondary CPUs ... Feb 13 20:43:28.087716 kernel: smpboot: x86: Booting SMP configuration: Feb 13 20:43:28.087734 kernel: .... node #0, CPUs: #1 Feb 13 20:43:28.087749 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Feb 13 20:43:28.087765 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Feb 13 20:43:28.087780 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 20:43:28.087795 kernel: smpboot: Max logical packages: 1 Feb 13 20:43:28.087810 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Feb 13 20:43:28.087824 kernel: devtmpfs: initialized Feb 13 20:43:28.087839 kernel: x86/mm: Memory block size: 128MB Feb 13 20:43:28.087857 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Feb 13 20:43:28.087870 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 20:43:28.087884 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 20:43:28.087898 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 20:43:28.088012 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 20:43:28.088022 kernel: audit: initializing netlink subsys (disabled) Feb 13 20:43:28.088030 kernel: audit: type=2000 audit(1739479406.027:1): state=initialized audit_enabled=0 res=1 Feb 13 20:43:28.088038 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 20:43:28.088046 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 20:43:28.088058 kernel: cpuidle: using governor menu Feb 13 20:43:28.088066 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 20:43:28.088074 kernel: dca service started, version 1.12.1 Feb 13 20:43:28.088082 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Feb 13 20:43:28.088093 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 13 20:43:28.088102 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 20:43:28.088113 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 20:43:28.088123 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 20:43:28.088133 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 20:43:28.088145 kernel: ACPI: Added _OSI(Module Device) Feb 13 20:43:28.088156 kernel: ACPI: Added _OSI(Processor Device) Feb 13 20:43:28.088167 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 20:43:28.088177 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 20:43:28.088186 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 20:43:28.088197 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 20:43:28.088205 kernel: ACPI: Interpreter enabled Feb 13 20:43:28.088216 kernel: ACPI: PM: (supports S0 S5) Feb 13 20:43:28.088224 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 20:43:28.088236 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 20:43:28.088244 kernel: PCI: Ignoring E820 reservations for host bridge windows Feb 13 20:43:28.088252 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Feb 13 20:43:28.088261 kernel: iommu: Default domain type: Translated Feb 13 20:43:28.088272 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 20:43:28.088280 kernel: efivars: Registered efivars operations Feb 13 20:43:28.088291 kernel: PCI: Using ACPI for IRQ routing Feb 13 20:43:28.088299 kernel: PCI: System does not support PCI Feb 13 20:43:28.088308 kernel: vgaarb: loaded Feb 13 20:43:28.088320 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Feb 13 20:43:28.088328 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 20:43:28.088339 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:43:28.088347 kernel: 
pnp: PnP ACPI init Feb 13 20:43:28.088358 kernel: pnp: PnP ACPI: found 3 devices Feb 13 20:43:28.088367 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 20:43:28.088377 kernel: NET: Registered PF_INET protocol family Feb 13 20:43:28.088386 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 20:43:28.088395 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 13 20:43:28.088407 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:43:28.088416 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 20:43:28.088427 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 20:43:28.088435 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 13 20:43:28.088446 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 20:43:28.088454 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 20:43:28.088464 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:43:28.088473 kernel: NET: Registered PF_XDP protocol family Feb 13 20:43:28.088481 kernel: PCI: CLS 0 bytes, default 64 Feb 13 20:43:28.088500 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 20:43:28.088512 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB) Feb 13 20:43:28.088523 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 20:43:28.088532 kernel: Initialise system trusted keyrings Feb 13 20:43:28.088542 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 13 20:43:28.088552 kernel: Key type asymmetric registered Feb 13 20:43:28.088562 kernel: Asymmetric key parser 'x509' registered Feb 13 20:43:28.088570 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 20:43:28.088580 kernel: io scheduler mq-deadline registered Feb 13 20:43:28.088591 kernel: io scheduler kyber registered Feb 13 20:43:28.088601 kernel: io scheduler bfq registered Feb 13 20:43:28.088610 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 20:43:28.088618 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:43:28.088629 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 20:43:28.088638 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 20:43:28.088649 kernel: i8042: PNP: No PS/2 controller found. 
Feb 13 20:43:28.088794 kernel: rtc_cmos 00:02: registered as rtc0 Feb 13 20:43:28.088975 kernel: rtc_cmos 00:02: setting system clock to 2025-02-13T20:43:27 UTC (1739479407) Feb 13 20:43:28.089070 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Feb 13 20:43:28.089083 kernel: intel_pstate: CPU model not supported Feb 13 20:43:28.089092 kernel: efifb: probing for efifb Feb 13 20:43:28.089101 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 13 20:43:28.089109 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 13 20:43:28.089117 kernel: efifb: scrolling: redraw Feb 13 20:43:28.089125 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 20:43:28.089144 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 20:43:28.089154 kernel: fb0: EFI VGA frame buffer device Feb 13 20:43:28.089162 kernel: pstore: Using crash dump compression: deflate Feb 13 20:43:28.089170 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 20:43:28.089178 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:43:28.089186 kernel: Segment Routing with IPv6 Feb 13 20:43:28.089201 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:43:28.089209 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:43:28.089218 kernel: Key type dns_resolver registered Feb 13 20:43:28.089231 kernel: IPI shorthand broadcast: enabled Feb 13 20:43:28.089241 kernel: sched_clock: Marking stable (793004100, 42132700)->(1027837300, -192700500) Feb 13 20:43:28.089250 kernel: registered taskstats version 1 Feb 13 20:43:28.089258 kernel: Loading compiled-in X.509 certificates Feb 13 20:43:28.089270 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 20:43:28.089278 kernel: Key type .fscrypt registered Feb 13 20:43:28.089288 kernel: Key type fscrypt-provisioning registered Feb 13 20:43:28.089297 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 20:43:28.089306 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:43:28.089318 kernel: ima: No architecture policies found Feb 13 20:43:28.089327 kernel: clk: Disabling unused clocks Feb 13 20:43:28.089338 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 20:43:28.089346 kernel: Write protecting the kernel read-only data: 36864k Feb 13 20:43:28.089357 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 20:43:28.089365 kernel: Run /init as init process Feb 13 20:43:28.089376 kernel: with arguments: Feb 13 20:43:28.089384 kernel: /init Feb 13 20:43:28.089395 kernel: with environment: Feb 13 20:43:28.089405 kernel: HOME=/ Feb 13 20:43:28.089415 kernel: TERM=linux Feb 13 20:43:28.089424 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:43:28.089435 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:43:28.089455 systemd[1]: Detected virtualization microsoft. Feb 13 20:43:28.089466 systemd[1]: Detected architecture x86-64. Feb 13 20:43:28.089477 systemd[1]: Running in initrd. Feb 13 20:43:28.089488 systemd[1]: No hostname configured, using default hostname. Feb 13 20:43:28.089501 systemd[1]: Hostname set to <localhost>. Feb 13 20:43:28.089512 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:43:28.089521 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:43:28.089531 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:43:28.089541 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:43:28.089551 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:43:28.089562 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:43:28.089571 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:43:28.089584 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:43:28.089595 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:43:28.089606 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:43:28.089615 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:43:28.089626 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:43:28.089635 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:43:28.089647 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:43:28.089658 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:43:28.089669 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:43:28.089678 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:43:28.089690 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:43:28.089698 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:43:28.089709 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:43:28.089719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:43:28.089728 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:43:28.089741 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:43:28.089750 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:43:28.089761 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:43:28.089770 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:43:28.089781 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:43:28.089790 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:43:28.089802 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:43:28.089810 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:43:28.089821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:43:28.089859 systemd-journald[176]: Collecting audit messages is disabled. Feb 13 20:43:28.089884 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:43:28.089895 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:43:28.089917 systemd-journald[176]: Journal started Feb 13 20:43:28.089944 systemd-journald[176]: Runtime Journal (/run/log/journal/353e4430d8b5446a9e603068e2761848) is 8.0M, max 158.8M, 150.8M free. 
Feb 13 20:43:28.087180 systemd-modules-load[177]: Inserted module 'overlay' Feb 13 20:43:28.098057 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:43:28.098683 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:43:28.111211 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:43:28.120623 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:43:28.125355 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:43:28.142304 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:43:28.169001 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:43:28.169029 kernel: Bridge firewalling registered Feb 13 20:43:28.154764 systemd-modules-load[177]: Inserted module 'br_netfilter' Feb 13 20:43:28.156404 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:43:28.164421 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:43:28.182239 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:43:28.186058 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:43:28.186445 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:43:28.206040 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:43:28.213121 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:43:28.220695 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:43:28.228065 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:43:28.237087 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:43:28.242612 dracut-cmdline[211]: dracut-dracut-053 Feb 13 20:43:28.245986 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:43:28.303596 systemd-resolved[215]: Positive Trust Anchors: Feb 13 20:43:28.305935 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:43:28.305994 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:43:28.331332 systemd-resolved[215]: Defaulting to hostname 'linux'. 
Feb 13 20:43:28.340199 kernel: SCSI subsystem initialized Feb 13 20:43:28.332615 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:43:28.343175 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:43:28.351927 kernel: Loading iSCSI transport class v2.0-870. Feb 13 20:43:28.362930 kernel: iscsi: registered transport (tcp) Feb 13 20:43:28.384887 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:43:28.384998 kernel: QLogic iSCSI HBA Driver Feb 13 20:43:28.420166 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:43:28.430033 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:43:28.458542 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:43:28.458626 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:43:28.461646 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:43:28.501944 kernel: raid6: avx512x4 gen() 18345 MB/s Feb 13 20:43:28.520922 kernel: raid6: avx512x2 gen() 18421 MB/s Feb 13 20:43:28.539924 kernel: raid6: avx512x1 gen() 18452 MB/s Feb 13 20:43:28.559923 kernel: raid6: avx2x4 gen() 18332 MB/s Feb 13 20:43:28.578919 kernel: raid6: avx2x2 gen() 18300 MB/s Feb 13 20:43:28.598776 kernel: raid6: avx2x1 gen() 14040 MB/s Feb 13 20:43:28.598834 kernel: raid6: using algorithm avx512x1 gen() 18452 MB/s Feb 13 20:43:28.620304 kernel: raid6: .... xor() 26888 MB/s, rmw enabled Feb 13 20:43:28.620344 kernel: raid6: using avx512x2 recovery algorithm Feb 13 20:43:28.641933 kernel: xor: automatically using best checksumming function avx Feb 13 20:43:28.792940 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:43:28.802354 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:43:28.812040 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:43:28.823630 systemd-udevd[395]: Using default interface naming scheme 'v255'. Feb 13 20:43:28.828041 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:43:28.848066 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:43:28.863718 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Feb 13 20:43:28.889185 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:43:28.899421 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:43:28.940016 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:43:28.954154 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:43:28.982202 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:43:28.990990 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:43:28.997713 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:43:29.003386 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:43:29.012101 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:43:29.026927 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 20:43:29.039449 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:43:29.055073 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 13 20:43:29.059176 kernel: AES CTR mode by8 optimization enabled Feb 13 20:43:29.060761 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:43:29.061782 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:43:29.069637 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:43:29.077266 kernel: hv_vmbus: Vmbus version:5.2 Feb 13 20:43:29.077519 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:43:29.077777 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:43:29.080654 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:43:29.098187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:43:29.108694 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 20:43:29.108739 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 20:43:29.115613 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:43:29.121333 kernel: PTP clock support registered Feb 13 20:43:29.121104 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:43:29.132564 kernel: hv_utils: Registering HyperV Utility Driver Feb 13 20:43:29.132603 kernel: hv_vmbus: registering driver hv_utils Feb 13 20:43:29.136965 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 20:43:29.137002 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 20:43:29.139237 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 20:43:29.917317 systemd-resolved[215]: Clock change detected. Flushing caches. Feb 13 20:43:29.921272 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:43:29.925332 kernel: hv_vmbus: registering driver hv_storvsc Feb 13 20:43:29.935987 kernel: scsi host0: storvsc_host_t Feb 13 20:43:29.936200 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 13 20:43:29.936233 kernel: scsi host1: storvsc_host_t Feb 13 20:43:29.942238 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 13 20:43:29.944969 kernel: hv_vmbus: registering driver hv_netvsc Feb 13 20:43:29.945001 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 20:43:29.951651 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 13 20:43:29.951684 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 13 20:43:29.968254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:43:29.980175 kernel: hv_vmbus: registering driver hid_hyperv Feb 13 20:43:29.980694 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:43:29.996361 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 13 20:43:29.996413 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 13 20:43:30.011267 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 20:43:30.012298 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 20:43:30.012316 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 20:43:30.011194 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
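Note the timestamp discontinuity in this stretch: hv_utils' TimeSync IC steps the clock, the entries jump from 20:43:29.139237 to 20:43:29.917317, and systemd-resolved reacts with "Clock change detected. Flushing caches." The apparent step is easy to check from the two timestamps:

    from datetime import datetime

    fmt = "%b %d %H:%M:%S.%f"
    before = datetime.strptime("Feb 13 20:43:29.139237", fmt)
    after = datetime.strptime("Feb 13 20:43:29.917317", fmt)
    print((after - before).total_seconds())  # ~0.778 s apparent jump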
Feb 13 20:43:30.036106 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 20:43:30.049027 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 20:43:30.049226 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 20:43:30.049391 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 20:43:30.049558 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 20:43:30.049724 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:43:30.049744 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 20:43:30.065996 kernel: hv_netvsc 7c1e5221-73cb-7c1e-5221-73cb7c1e5221 eth0: VF slot 1 added Feb 13 20:43:30.076416 kernel: hv_vmbus: registering driver hv_pci Feb 13 20:43:30.076463 kernel: hv_pci d7a6c328-2b43-457c-9840-37d87c153558: PCI VMBus probing: Using version 0x10004 Feb 13 20:43:30.122759 kernel: hv_pci d7a6c328-2b43-457c-9840-37d87c153558: PCI host bridge to bus 2b43:00 Feb 13 20:43:30.122945 kernel: pci_bus 2b43:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 13 20:43:30.123142 kernel: pci_bus 2b43:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 20:43:30.123304 kernel: pci 2b43:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 13 20:43:30.123492 kernel: pci 2b43:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 20:43:30.123670 kernel: pci 2b43:00:02.0: enabling Extended Tags Feb 13 20:43:30.123839 kernel: pci 2b43:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2b43:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 13 20:43:30.124048 kernel: pci_bus 2b43:00: busn_res: [bus 00-ff] end is updated to 00 Feb 13 20:43:30.124226 kernel: pci 2b43:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 20:43:30.294507 kernel: mlx5_core 2b43:00:02.0: enabling device (0000 -> 0002) Feb 13 20:43:30.518708 kernel: mlx5_core 2b43:00:02.0: firmware version: 14.30.5000 Feb 13 20:43:30.518879 kernel: hv_netvsc 7c1e5221-73cb-7c1e-5221-73cb7c1e5221 eth0: VF registering: eth1 Feb 13 20:43:30.519019 kernel: mlx5_core 2b43:00:02.0 eth1: joined to eth0 Feb 13 20:43:30.519156 kernel: mlx5_core 2b43:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 20:43:30.517601 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Feb 13 20:43:30.529002 kernel: mlx5_core 2b43:00:02.0 enP11075s1: renamed from eth1 Feb 13 20:43:30.590192 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (448) Feb 13 20:43:30.606111 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 20:43:30.627457 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (452) Feb 13 20:43:30.627285 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Feb 13 20:43:30.642786 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 20:43:30.650365 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Feb 13 20:43:30.671094 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
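The sd driver reports the same sda capacity in both decimal and binary units; the two figures are one byte count, which a quick computation from the logged block count confirms:

    blocks, logical = 63_737_856, 512      # from the sd 0:0:0:0 line above
    size = blocks * logical
    print(f"{size / 1e9:.1f} GB")          # 32.6 (decimal gigabytes)
    print(f"{size / 2**30:.1f} GiB")       # 30.4 (binary gibibytes)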
Feb 13 20:43:30.683013 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:43:30.689974 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:43:31.698018 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:43:31.698785 disk-uuid[602]: The operation has completed successfully. Feb 13 20:43:31.795438 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:43:31.795551 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:43:31.816131 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:43:31.821919 sh[688]: Success Feb 13 20:43:31.852533 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 20:43:32.027309 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:43:32.037076 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:43:32.043748 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 20:43:32.056984 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 20:43:32.057019 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:43:32.062658 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:43:32.065562 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:43:32.067873 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:43:32.408847 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:43:32.414584 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:43:32.425330 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:43:32.431520 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:43:32.444912 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:43:32.444951 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:43:32.444984 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:43:32.464432 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:43:32.473416 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:43:32.479317 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:43:32.484287 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:43:32.497167 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:43:32.532816 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:43:32.543115 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:43:32.562006 systemd-networkd[872]: lo: Link UP Feb 13 20:43:32.562015 systemd-networkd[872]: lo: Gained carrier Feb 13 20:43:32.564112 systemd-networkd[872]: Enumeration completed Feb 13 20:43:32.564356 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:43:32.567117 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:43:32.567121 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
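verity-setup validates /dev/mapper/usr against the verity.usrhash root hash from the command line, using the kernel's "sha256-avx2" implementation reported above. A deliberately simplified two-level sketch of the hash-tree idea follows; real dm-verity uses a multi-level tree plus a salt and superblock metadata, so this is illustration only:

    import hashlib

    def root_hash(data: bytes, block: int = 4096) -> str:
        # Hash fixed-size data blocks, then hash the concatenated digests.
        digests = b"".join(hashlib.sha256(data[i:i + block]).digest()
                           for i in range(0, len(data), block))
        return hashlib.sha256(digests).hexdigest()

    print(root_hash(b"\0" * 16384))  # 64 hex chars, like verity.usrhash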
Feb 13 20:43:32.568689 systemd[1]: Reached target network.target - Network. Feb 13 20:43:32.624978 kernel: mlx5_core 2b43:00:02.0 enP11075s1: Link up Feb 13 20:43:32.658686 kernel: hv_netvsc 7c1e5221-73cb-7c1e-5221-73cb7c1e5221 eth0: Data path switched to VF: enP11075s1 Feb 13 20:43:32.658159 systemd-networkd[872]: enP11075s1: Link UP Feb 13 20:43:32.658286 systemd-networkd[872]: eth0: Link UP Feb 13 20:43:32.658469 systemd-networkd[872]: eth0: Gained carrier Feb 13 20:43:32.658482 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:43:32.670192 systemd-networkd[872]: enP11075s1: Gained carrier Feb 13 20:43:32.693016 systemd-networkd[872]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 20:43:33.232070 ignition[804]: Ignition 2.19.0 Feb 13 20:43:33.232082 ignition[804]: Stage: fetch-offline Feb 13 20:43:33.233621 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:43:33.232138 ignition[804]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:43:33.232149 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:43:33.232265 ignition[804]: parsed url from cmdline: "" Feb 13 20:43:33.232270 ignition[804]: no config URL provided Feb 13 20:43:33.232278 ignition[804]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:43:33.232288 ignition[804]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:43:33.232295 ignition[804]: failed to fetch config: resource requires networking Feb 13 20:43:33.232516 ignition[804]: Ignition finished successfully Feb 13 20:43:33.261136 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 20:43:33.276327 ignition[882]: Ignition 2.19.0 Feb 13 20:43:33.276338 ignition[882]: Stage: fetch Feb 13 20:43:33.276545 ignition[882]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:43:33.276559 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:43:33.276646 ignition[882]: parsed url from cmdline: "" Feb 13 20:43:33.276649 ignition[882]: no config URL provided Feb 13 20:43:33.276653 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:43:33.276660 ignition[882]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:43:33.276680 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 20:43:33.360488 ignition[882]: GET result: OK Feb 13 20:43:33.360612 ignition[882]: config has been read from IMDS userdata Feb 13 20:43:33.360652 ignition[882]: parsing config with SHA512: 91288b374a6cef313454edcfe0567a69e869e1809e1f5cd6c3c5b22f684d11e90513327dbbd729b7278a3f424f92e9abcc54fa8b0e6c98e484cef5067ed06a68 Feb 13 20:43:33.366683 unknown[882]: fetched base config from "system" Feb 13 20:43:33.366698 unknown[882]: fetched base config from "system" Feb 13 20:43:33.367199 ignition[882]: fetch: fetch complete Feb 13 20:43:33.366707 unknown[882]: fetched user config from "azure" Feb 13 20:43:33.367205 ignition[882]: fetch: fetch passed Feb 13 20:43:33.368885 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:43:33.367255 ignition[882]: Ignition finished successfully Feb 13 20:43:33.379173 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
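The fetch-offline stage fails by design ("resource requires networking"); once eth0 holds its DHCP lease, the fetch stage pulls the config from the Azure IMDS endpoint logged above. A minimal reproduction, runnable only from inside an Azure VM and assuming the standard IMDS requirements (a Metadata header, base64-encoded userData at this API version):

    import base64
    import hashlib
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:  # in-VM only
        userdata = base64.b64decode(resp.read())
    print(userdata.decode(errors="replace"))
    # Digest of the fetched bytes; Ignition logs a SHA512 for the config
    # it parses, so the two should line up for a raw Ignition userdata.
    print(hashlib.sha512(userdata).hexdigest())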
Feb 13 20:43:33.394752 ignition[888]: Ignition 2.19.0 Feb 13 20:43:33.394765 ignition[888]: Stage: kargs Feb 13 20:43:33.395001 ignition[888]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:43:33.395015 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:43:33.395894 ignition[888]: kargs: kargs passed Feb 13 20:43:33.395937 ignition[888]: Ignition finished successfully Feb 13 20:43:33.406794 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:43:33.416302 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:43:33.432390 ignition[894]: Ignition 2.19.0 Feb 13 20:43:33.432405 ignition[894]: Stage: disks Feb 13 20:43:33.434578 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:43:33.432638 ignition[894]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:43:33.438788 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:43:33.432652 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:43:33.433535 ignition[894]: disks: disks passed Feb 13 20:43:33.433578 ignition[894]: Ignition finished successfully Feb 13 20:43:33.453046 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:43:33.459101 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:43:33.461809 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:43:33.466610 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:43:33.480121 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:43:33.531028 systemd-fsck[902]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 20:43:33.535869 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:43:33.549067 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:43:33.640980 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 20:43:33.641107 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:43:33.641804 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:43:33.672067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:43:33.677290 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:43:33.685167 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:43:33.698813 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (913) Feb 13 20:43:33.698853 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:43:33.698874 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:43:33.700758 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:43:33.701605 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:43:33.708990 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:43:33.702141 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:43:33.716209 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:43:33.718418 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
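systemd-fsck's summary line encodes inode and block usage; parsed out, the ROOT filesystem is nearly empty at this point, as expected on a first boot:

    import re

    line = "ROOT: clean, 14/7326000 files, 477710/7359488 blocks"  # logged
    m = re.search(r"(\d+)/(\d+) files, (\d+)/(\d+) blocks", line)
    f_used, f_total, b_used, b_total = map(int, m.groups())
    print(f"inodes used: {f_used / f_total:.6%}")  # ~0.000191%
    print(f"blocks used: {b_used / b_total:.2%}")  # ~6.49%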
Feb 13 20:43:33.731114 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:43:34.319354 coreos-metadata[915]: Feb 13 20:43:34.319 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 20:43:34.323417 coreos-metadata[915]: Feb 13 20:43:34.322 INFO Fetch successful Feb 13 20:43:34.323417 coreos-metadata[915]: Feb 13 20:43:34.322 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 20:43:34.333218 coreos-metadata[915]: Feb 13 20:43:34.333 INFO Fetch successful Feb 13 20:43:34.349136 coreos-metadata[915]: Feb 13 20:43:34.349 INFO wrote hostname ci-4081.3.1-a-d679334e6e to /sysroot/etc/hostname Feb 13 20:43:34.351491 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:43:34.448319 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:43:34.469131 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:43:34.483324 systemd-networkd[872]: enP11075s1: Gained IPv6LL Feb 13 20:43:34.487039 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:43:34.505424 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:43:34.547219 systemd-networkd[872]: eth0: Gained IPv6LL Feb 13 20:43:35.304361 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:43:35.313121 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:43:35.321064 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:43:35.329807 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:43:35.330622 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:43:35.365267 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:43:35.368144 ignition[1031]: INFO : Ignition 2.19.0 Feb 13 20:43:35.368144 ignition[1031]: INFO : Stage: mount Feb 13 20:43:35.368144 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:43:35.368144 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:43:35.368144 ignition[1031]: INFO : mount: mount passed Feb 13 20:43:35.368144 ignition[1031]: INFO : Ignition finished successfully Feb 13 20:43:35.383684 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:43:35.391147 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:43:35.399137 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:43:35.414982 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1042) Feb 13 20:43:35.420617 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:43:35.420664 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:43:35.423044 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:43:35.427972 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:43:35.429400 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:43:35.451675 ignition[1059]: INFO : Ignition 2.19.0 Feb 13 20:43:35.451675 ignition[1059]: INFO : Stage: files Feb 13 20:43:35.455576 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:43:35.455576 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:43:35.455576 ignition[1059]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:43:35.479926 ignition[1059]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:43:35.479926 ignition[1059]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:43:35.536609 ignition[1059]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:43:35.540764 ignition[1059]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:43:35.540764 ignition[1059]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:43:35.537121 unknown[1059]: wrote ssh authorized keys file for user: core Feb 13 20:43:35.550991 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 20:43:35.555580 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 13 20:43:35.781071 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 20:43:36.049773 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 20:43:36.049773 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:43:36.064972 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 20:43:36.631326 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 20:43:37.714101 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:43:37.714101 ignition[1059]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 20:43:37.750871 ignition[1059]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:43:37.759748 ignition[1059]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:43:37.759748 ignition[1059]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 20:43:37.759748 ignition[1059]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:43:37.759748 ignition[1059]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:43:37.759748 ignition[1059]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:43:37.759748 ignition[1059]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:43:37.759748 ignition[1059]: INFO : files: files passed Feb 13 20:43:37.759748 ignition[1059]: INFO : Ignition finished successfully Feb 13 20:43:37.753061 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:43:37.775134 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:43:37.794475 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:43:37.797708 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:43:37.797807 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:43:37.813488 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:43:37.818503 initrd-setup-root-after-ignition[1087]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:43:37.822324 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:43:37.823743 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:43:37.833860 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:43:37.849144 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:43:37.871089 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:43:37.871198 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
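Everything the files stage did above (the ssh key for core, the files under /home/core and /opt, the prepare-helm.service unit and its preset) is driven by the fetched Ignition config. A hypothetical minimal config in the Ignition 3.x JSON schema that would exercise the same operation types, not this machine's actual config:

    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {"users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... core@example"]},
        ]},
        "storage": {"files": [
            {"path": "/home/core/install.sh", "mode": 0o755,
             "contents": {"source": "data:,echo%20hello"}},
        ]},
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=example\n"
                         "[Service]\nExecStart=/usr/bin/true\n"
                         "[Install]\nWantedBy=multi-user.target\n"},
        ]},
    }
    print(json.dumps(config, indent=2))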
Feb 13 20:43:37.879759 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:43:37.882219 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:43:37.884798 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:43:37.898372 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:43:37.911909 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:43:37.918078 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:43:37.929144 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:43:37.929313 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:43:37.929716 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:43:37.930086 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:43:37.930185 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:43:37.930906 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:43:37.931775 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:43:37.932676 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:43:37.933171 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:43:37.938237 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:43:37.938641 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:43:37.939038 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:43:37.939414 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:43:37.939793 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:43:37.940204 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:43:37.940596 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:43:37.940723 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:43:37.941408 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:43:37.941829 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:43:37.942586 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:43:37.976860 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:43:38.025511 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:43:38.025682 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:43:38.033473 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:43:38.033651 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:43:38.042247 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:43:38.042413 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:43:38.047373 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:43:38.052146 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:43:38.067473 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Feb 13 20:43:38.069816 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:43:38.070042 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:43:38.079945 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:43:38.086824 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:43:38.100743 ignition[1111]: INFO : Ignition 2.19.0 Feb 13 20:43:38.100743 ignition[1111]: INFO : Stage: umount Feb 13 20:43:38.100743 ignition[1111]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:43:38.100743 ignition[1111]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:43:38.100743 ignition[1111]: INFO : umount: umount passed Feb 13 20:43:38.100743 ignition[1111]: INFO : Ignition finished successfully Feb 13 20:43:38.087003 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:43:38.090274 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:43:38.090424 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:43:38.095743 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:43:38.095863 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:43:38.114559 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:43:38.114648 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:43:38.121744 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:43:38.121861 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:43:38.123013 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:43:38.123051 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:43:38.123492 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:43:38.123526 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:43:38.123919 systemd[1]: Stopped target network.target - Network. Feb 13 20:43:38.124264 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:43:38.124309 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:43:38.124684 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:43:38.125060 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:43:38.140181 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:43:38.140729 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:43:38.145287 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:43:38.147576 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:43:38.154663 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:43:38.193812 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:43:38.193871 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:43:38.200622 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:43:38.203016 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:43:38.207493 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:43:38.207553 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:43:38.212460 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Feb 13 20:43:38.219714 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:43:38.220000 systemd-networkd[872]: eth0: DHCPv6 lease lost Feb 13 20:43:38.227354 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:43:38.227945 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:43:38.228059 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:43:38.234830 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:43:38.234926 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:43:38.240866 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:43:38.242177 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:43:38.245858 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:43:38.245918 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:43:38.259153 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:43:38.259216 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:43:38.276063 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:43:38.280738 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:43:38.280807 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:43:38.289098 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:43:38.289158 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:43:38.296258 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:43:38.296313 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:43:38.303624 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:43:38.303682 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:43:38.311902 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:43:38.328062 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:43:38.328222 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:43:38.333858 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:43:38.333898 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:43:38.339234 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:43:38.339276 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:43:38.344305 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:43:38.346804 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:43:38.356371 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:43:38.356419 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:43:38.365618 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:43:38.365678 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:43:38.374084 kernel: hv_netvsc 7c1e5221-73cb-7c1e-5221-73cb7c1e5221 eth0: Data path switched from VF: enP11075s1 Feb 13 20:43:38.389123 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Feb 13 20:43:38.391919 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:43:38.394639 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:43:38.397687 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:43:38.397744 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:43:38.403665 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:43:38.403751 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:43:38.419032 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:43:38.419150 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:43:38.424605 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:43:38.430114 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:43:38.456531 systemd[1]: Switching root. Feb 13 20:43:38.525699 systemd-journald[176]: Journal stopped Feb 13 20:43:44.740062 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Feb 13 20:43:44.740090 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:43:44.740103 kernel: SELinux: policy capability open_perms=1 Feb 13 20:43:44.740113 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:43:44.740122 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:43:44.740132 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:43:44.740143 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:43:44.740156 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:43:44.740165 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:43:44.740176 kernel: audit: type=1403 audit(1739479420.263:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:43:44.740189 systemd[1]: Successfully loaded SELinux policy in 124.940ms. Feb 13 20:43:44.740205 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.624ms. Feb 13 20:43:44.740227 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:43:44.740247 systemd[1]: Detected virtualization microsoft. Feb 13 20:43:44.740271 systemd[1]: Detected architecture x86-64. Feb 13 20:43:44.740290 systemd[1]: Detected first boot. Feb 13 20:43:44.740314 systemd[1]: Hostname set to <ci-4081.3.1-a-d679334e6e>. Feb 13 20:43:44.740333 systemd[1]: Initializing machine ID from random generator. Feb 13 20:43:44.740355 zram_generator::config[1155]: No configuration found. Feb 13 20:43:44.740378 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:43:44.740400 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:43:44.740420 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:43:44.740443 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:43:44.740463 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:43:44.740483 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:43:44.740503 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
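The systemd banner above compactly lists compile-time features with +/- prefixes; splitting it shows what this systemd 255 build has compiled in versus out:

    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP "
              "+GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS "
              "-FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK "
              "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 "
              "+XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP "
              "-SYSVINIT default-hierarchy=unified")
    enabled = sorted(t[1:] for t in banner.split() if t[0] == "+")
    disabled = sorted(t[1:] for t in banner.split() if t[0] == "-")
    print("compiled out:", ", ".join(disabled))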
Feb 13 20:43:44.740534 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:43:44.740555 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:43:44.740577 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:43:44.740598 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:43:44.740620 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:43:44.740640 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:43:44.740659 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:43:44.740681 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:43:44.740707 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:43:44.740728 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:43:44.740751 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:43:44.740770 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 20:43:44.740791 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:43:44.740812 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:43:44.740839 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:43:44.740863 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:43:44.740889 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:43:44.740911 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:43:44.740931 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:43:44.740974 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:43:44.740991 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:43:44.741008 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:43:44.741023 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:43:44.741042 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:43:44.741059 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:43:44.741077 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:43:44.741094 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:43:44.741111 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:43:44.741131 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:43:44.741148 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:43:44.741165 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:43:44.741182 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:43:44.741199 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:43:44.741215 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
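Unit names like dev-disk-by\x2dlabel-OEM.device come from systemd's path escaping, where "/" maps to "-" and other special bytes to \xXX. A toy reimplementation, sufficient for this example only (systemd-escape implements the complete rules):

    def unit_escape(path: str) -> str:
        # Toy version of systemd path escaping: '/' -> '-', other special
        # bytes -> \xXX. Handles simple paths like the one below.
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in "_.":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    print(unit_escape("/dev/disk/by-label/OEM") + ".device")
    # -> dev-disk-by\x2dlabel-OEM.device, matching the unit above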
Feb 13 20:43:44.741231 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:43:44.741246 systemd[1]: Reached target machines.target - Containers. Feb 13 20:43:44.741265 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:43:44.741282 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:43:44.741300 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:43:44.741317 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:43:44.741345 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:43:44.741361 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:43:44.741379 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:43:44.741393 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:43:44.741408 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:43:44.741427 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:43:44.741443 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:43:44.741459 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:43:44.741475 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:43:44.741491 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:43:44.741507 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:43:44.741523 kernel: loop: module loaded Feb 13 20:43:44.741537 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:43:44.741556 kernel: fuse: init (API version 7.39) Feb 13 20:43:44.741572 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:43:44.741613 systemd-journald[1240]: Collecting audit messages is disabled. Feb 13 20:43:44.741645 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:43:44.741683 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:43:44.741701 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:43:44.741720 systemd[1]: Stopped verity-setup.service. Feb 13 20:43:44.741739 systemd-journald[1240]: Journal started Feb 13 20:43:44.741773 systemd-journald[1240]: Runtime Journal (/run/log/journal/43c02a2663c5477f93d13ffe8b6c7c39) is 8.0M, max 158.8M, 150.8M free. Feb 13 20:43:44.052201 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:43:44.207652 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 20:43:44.208081 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:43:44.753133 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:43:44.758979 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:43:44.763268 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Feb 13 20:43:44.766040 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:43:44.781769 kernel: ACPI: bus type drm_connector registered Feb 13 20:43:44.770595 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:43:44.773194 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:43:44.776210 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:43:44.782492 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:43:44.785252 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:43:44.788485 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:43:44.788650 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:43:44.792163 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:43:44.792318 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:43:44.795796 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:43:44.796006 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:43:44.799188 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:43:44.799383 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:43:44.802925 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:43:44.803149 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:43:44.806735 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:43:44.807151 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:43:44.810587 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:43:44.816733 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:43:44.827869 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:43:44.841386 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:43:44.852998 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:43:44.862095 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:43:44.865077 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:43:44.865193 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:43:44.870473 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:43:44.880134 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:43:44.884101 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:43:44.886680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:43:44.891752 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:43:44.895871 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:43:44.898786 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 13 20:43:44.903180 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:43:44.906635 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:43:44.910172 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:43:44.914658 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:43:44.919690 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:43:44.923017 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:43:44.928183 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:43:44.932073 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:43:44.935639 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:43:44.940560 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:43:44.950813 systemd-journald[1240]: Time spent on flushing to /var/log/journal/43c02a2663c5477f93d13ffe8b6c7c39 is 29.814ms for 959 entries. Feb 13 20:43:44.950813 systemd-journald[1240]: System Journal (/var/log/journal/43c02a2663c5477f93d13ffe8b6c7c39) is 8.0M, max 2.6G, 2.6G free. Feb 13 20:43:45.005270 systemd-journald[1240]: Received client request to flush runtime journal. Feb 13 20:43:45.005341 kernel: loop0: detected capacity change from 0 to 142488 Feb 13 20:43:44.953151 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:43:44.971119 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:43:44.977322 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:43:44.988161 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:43:45.008422 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:43:45.013579 udevadm[1300]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 20:43:45.036572 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:43:45.037139 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:43:45.099087 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:43:45.376184 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:43:45.390191 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:43:45.417982 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:43:45.438981 kernel: loop1: detected capacity change from 0 to 218376 Feb 13 20:43:45.449348 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Feb 13 20:43:45.449372 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Feb 13 20:43:45.457223 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:43:45.496975 kernel: loop2: detected capacity change from 0 to 31056 Feb 13 20:43:45.865988 kernel: loop3: detected capacity change from 0 to 140768 Feb 13 20:43:46.147544 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
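journald reports its flush cost directly in the lines above; per entry that works out to roughly 31 microseconds:

    entries, total_ms = 959, 29.814  # from the systemd-journald line above
    print(f"{total_ms / entries * 1000:.1f} µs per entry")  # ~31.1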
Feb 13 20:43:46.158111 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:43:46.180899 systemd-udevd[1316]: Using default interface naming scheme 'v255'. Feb 13 20:43:46.321987 kernel: loop4: detected capacity change from 0 to 142488 Feb 13 20:43:46.332983 kernel: loop5: detected capacity change from 0 to 218376 Feb 13 20:43:46.342982 kernel: loop6: detected capacity change from 0 to 31056 Feb 13 20:43:46.351975 kernel: loop7: detected capacity change from 0 to 140768 Feb 13 20:43:46.408952 (sd-merge)[1318]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Feb 13 20:43:46.409626 (sd-merge)[1318]: Merged extensions into '/usr'. Feb 13 20:43:46.413629 systemd[1]: Reloading requested from client PID 1289 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:43:46.413645 systemd[1]: Reloading... Feb 13 20:43:46.472266 zram_generator::config[1341]: No configuration found. Feb 13 20:43:46.717115 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:43:46.778391 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:43:46.847005 kernel: hv_vmbus: registering driver hv_balloon Feb 13 20:43:46.847082 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 13 20:43:46.847101 kernel: hv_vmbus: registering driver hyperv_fb Feb 13 20:43:46.853616 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 13 20:43:46.858978 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 13 20:43:46.865349 kernel: Console: switching to colour dummy device 80x25 Feb 13 20:43:46.873651 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 20:43:46.872197 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 20:43:46.872413 systemd[1]: Reloading finished in 458 ms. Feb 13 20:43:46.901490 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:43:46.905558 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:43:46.992465 systemd[1]: Starting ensure-sysext.service... Feb 13 20:43:47.013163 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:43:47.019170 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:43:47.039151 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:43:47.051334 systemd[1]: Reloading requested from client PID 1447 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:43:47.051351 systemd[1]: Reloading... Feb 13 20:43:47.086365 systemd-tmpfiles[1450]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:43:47.086904 systemd-tmpfiles[1450]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:43:47.088236 systemd-tmpfiles[1450]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:43:47.088644 systemd-tmpfiles[1450]: ACLs are not supported, ignoring. Feb 13 20:43:47.088726 systemd-tmpfiles[1450]: ACLs are not supported, ignoring. 
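systemd-sysext (sd-merge) combines the four extension images into /usr as overlay layers, which is why loop4 through loop7 mirror the earlier loop0 through loop3 capacities. A toy path-level model of that merge, assuming only that extension layers shadow the base tree on conflicting paths; real sysext stacks the images with overlayfs:

    base_usr = {"/usr/bin/true": "base"}
    extensions = {
        "containerd-flatcar": {"/usr/bin/containerd": "containerd-flatcar"},
        "kubernetes": {"/usr/bin/kubectl": "kubernetes"},
        "oem-azure": {"/usr/bin/true": "oem-azure"},
    }
    merged = dict(base_usr)
    for name in sorted(extensions):    # deterministic layer order (toy)
        merged.update(extensions[name])
    print(merged["/usr/bin/kubectl"])  # provided by an extension
    print(merged["/usr/bin/true"])     # shadowed by oem-azure in this toy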
Feb 13 20:43:47.142926 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1401) Feb 13 20:43:47.136550 systemd-tmpfiles[1450]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:43:47.136558 systemd-tmpfiles[1450]: Skipping /boot Feb 13 20:43:47.173287 systemd-tmpfiles[1450]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:43:47.173999 systemd-tmpfiles[1450]: Skipping /boot Feb 13 20:43:47.195977 zram_generator::config[1480]: No configuration found. Feb 13 20:43:47.383299 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Feb 13 20:43:47.504668 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:43:47.589372 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 20:43:47.593189 systemd[1]: Reloading finished in 541 ms. Feb 13 20:43:47.618563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:43:47.623188 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:43:47.646502 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:43:47.655141 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:43:47.662233 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:43:47.667232 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:43:47.670668 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:43:47.674438 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:43:47.684555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:43:47.692051 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:43:47.696552 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:43:47.699354 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:43:47.701223 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:43:47.708287 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:43:47.720587 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:43:47.728266 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:43:47.743608 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:43:47.754435 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:43:47.754635 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:43:47.760726 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:43:47.765723 lvm[1578]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:43:47.770608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
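[Editor's note] The "Duplicate line for path ..." warnings mean two tmpfiles.d snippets declare the same path; systemd-tmpfiles keeps the first definition and ignores the rest, so the warnings are harmless but traceable. A sketch for locating the clashing entries ('/var/log/journal' is just one of the duplicated paths named above):

    # Print the fully merged tmpfiles.d configuration; each snippet is preceded
    # by a comment naming its source file
    systemd-tmpfiles --cat-config > /tmp/tmpfiles-merged.conf
    # Find every declaration of the duplicated path
    grep -n '/var/log/journal' /tmp/tmpfiles-merged.conf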
Feb 13 20:43:47.774063 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:43:47.778820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:43:47.779337 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:43:47.783512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:43:47.784237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:43:47.794267 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:43:47.794443 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:43:47.809148 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:43:47.819024 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:43:47.833596 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:43:47.841138 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:43:47.848230 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:43:47.851078 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:43:47.851356 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:43:47.859193 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:43:47.863729 augenrules[1609]: No rules Feb 13 20:43:47.867122 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:43:47.879148 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:43:47.885236 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:43:47.900087 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:43:47.906799 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:43:47.906889 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:43:47.909582 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:43:47.909911 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:43:47.914832 lvm[1614]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:43:47.922437 systemd[1]: Finished ensure-sysext.service. Feb 13 20:43:47.925339 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:43:47.929627 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:43:47.945600 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:43:47.945788 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:43:47.949340 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:43:47.949526 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:43:47.956996 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
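[Editor's note] augenrules reported "No rules": no *.rules snippets under /etc/audit/rules.d were available to compile into the kernel audit ruleset, so audit-rules.service finished with an empty policy. For illustration only, a hedged sketch of adding one rule — the watch target below is an arbitrary example, not anything this system configures:

    # Example watch rule: record writes and attribute changes to sshd_config
    echo '-w /etc/ssh/sshd_config -p wa -k sshd_config' | \
      sudo tee /etc/audit/rules.d/10-example.rules
    # Rebuild /etc/audit/audit.rules from the snippets and load it
    sudo augenrules --load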
Feb 13 20:43:47.961380 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:43:47.961559 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:43:47.968466 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:43:47.968635 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:43:47.974946 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:43:47.975628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:43:48.039164 systemd-resolved[1589]: Positive Trust Anchors: Feb 13 20:43:48.039188 systemd-resolved[1589]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:43:48.039238 systemd-resolved[1589]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:43:48.058088 systemd-resolved[1589]: Using system hostname 'ci-4081.3.1-a-d679334e6e'. Feb 13 20:43:48.059407 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:43:48.062910 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:43:48.088916 systemd-networkd[1448]: lo: Link UP Feb 13 20:43:48.088925 systemd-networkd[1448]: lo: Gained carrier Feb 13 20:43:48.091374 systemd-networkd[1448]: Enumeration completed Feb 13 20:43:48.091486 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:43:48.091768 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:43:48.091772 systemd-networkd[1448]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:43:48.094658 systemd[1]: Reached target network.target - Network. Feb 13 20:43:48.102136 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:43:48.148978 kernel: mlx5_core 2b43:00:02.0 enP11075s1: Link up Feb 13 20:43:48.169992 kernel: hv_netvsc 7c1e5221-73cb-7c1e-5221-73cb7c1e5221 eth0: Data path switched to VF: enP11075s1 Feb 13 20:43:48.170944 systemd-networkd[1448]: enP11075s1: Link UP Feb 13 20:43:48.171128 systemd-networkd[1448]: eth0: Link UP Feb 13 20:43:48.171134 systemd-networkd[1448]: eth0: Gained carrier Feb 13 20:43:48.171158 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:43:48.177355 systemd-networkd[1448]: enP11075s1: Gained carrier Feb 13 20:43:48.213043 systemd-networkd[1448]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 20:43:48.723533 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
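[Editor's note] By this point systemd-networkd has matched eth0 against the catch-all zz-default.network, the hv_netvsc data path has switched to the mlx5 VF enP11075s1, and DHCP has handed out 10.200.8.4/24 with gateway 10.200.8.1 from 168.63.129.16 (on Azure the wire server also answers DHCP). systemd-resolved is up with the root-zone DNSSEC trust anchor. A sketch for verifying both daemons with their standard CLIs:

    # Link state, addresses, and which .network file matched
    networkctl status eth0
    # Per-link DNS servers and the global resolver configuration
    resolvectl status
    # Confirm the DHCP address from the log
    ip -4 addr show dev eth0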
Feb 13 20:43:48.727768 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:43:50.035291 systemd-networkd[1448]: enP11075s1: Gained IPv6LL Feb 13 20:43:50.035605 systemd-networkd[1448]: eth0: Gained IPv6LL Feb 13 20:43:50.037731 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:43:50.042497 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:43:51.259368 ldconfig[1285]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:43:51.270868 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:43:51.281219 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:43:51.292566 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:43:51.295697 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:43:51.298359 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:43:51.301357 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:43:51.305229 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:43:51.307878 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:43:51.310793 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:43:51.313831 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:43:51.313871 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:43:51.316036 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:43:51.319062 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:43:51.323051 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:43:51.334136 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:43:51.337700 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:43:51.340319 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:43:51.342691 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:43:51.345116 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:43:51.345147 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:43:51.353060 systemd[1]: Starting chronyd.service - NTP client/server... Feb 13 20:43:51.357087 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:43:51.369312 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:43:51.382125 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:43:51.388079 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:43:51.401594 (chronyd)[1646]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Feb 13 20:43:51.403145 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Feb 13 20:43:51.405858 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:43:51.405924 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Feb 13 20:43:51.408142 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Feb 13 20:43:51.413127 jq[1652]: false Feb 13 20:43:51.414476 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Feb 13 20:43:51.419319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:43:51.423209 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:43:51.434358 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:43:51.441113 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:43:51.443775 chronyd[1662]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Feb 13 20:43:51.445265 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:43:51.459152 KVP[1654]: KVP starting; pid is:1654 Feb 13 20:43:51.461161 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:43:51.469117 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:43:51.472153 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:43:51.472747 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:43:51.476140 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:43:51.488429 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:43:51.492939 chronyd[1662]: Timezone right/UTC failed leap second check, ignoring Feb 13 20:43:51.493876 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:43:51.493159 chronyd[1662]: Loaded seccomp filter (level 2) Feb 13 20:43:51.501913 kernel: hv_utils: KVP IC version 4.0 Feb 13 20:43:51.495132 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:43:51.497462 KVP[1654]: KVP LIC Version: 3.1 Feb 13 20:43:51.495507 systemd[1]: Started chronyd.service - NTP client/server. Feb 13 20:43:51.503566 jq[1670]: true Feb 13 20:43:51.520852 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:43:51.521095 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
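[Editor's note] chronyd 4.5 started with a level-2 seccomp filter and discarded the right/UTC timezone after it failed the leap-second sanity check (expected; that zone encodes leap seconds differently). To confirm chronyd is actually steering the clock once sources are reachable, the standard chronyc commands apply:

    # Current offset, stratum, and reference source
    chronyc tracking
    # Configured time sources with reachability statistics
    chronyc sources -v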
Feb 13 20:43:51.551900 update_engine[1669]: I20250213 20:43:51.551735 1669 main.cc:92] Flatcar Update Engine starting Feb 13 20:43:51.565049 extend-filesystems[1653]: Found loop4 Feb 13 20:43:51.565049 extend-filesystems[1653]: Found loop5 Feb 13 20:43:51.565049 extend-filesystems[1653]: Found loop6 Feb 13 20:43:51.565049 extend-filesystems[1653]: Found loop7 Feb 13 20:43:51.565049 extend-filesystems[1653]: Found sda Feb 13 20:43:51.565049 extend-filesystems[1653]: Found sda1 Feb 13 20:43:51.565049 extend-filesystems[1653]: Found sda2 Feb 13 20:43:51.565049 extend-filesystems[1653]: Found sda3 Feb 13 20:43:51.565049 extend-filesystems[1653]: Found usr Feb 13 20:43:51.565049 extend-filesystems[1653]: Found sda4 Feb 13 20:43:51.565049 extend-filesystems[1653]: Found sda6 Feb 13 20:43:51.565049 extend-filesystems[1653]: Found sda7 Feb 13 20:43:51.565049 extend-filesystems[1653]: Found sda9 Feb 13 20:43:51.664624 extend-filesystems[1653]: Checking size of /dev/sda9 Feb 13 20:43:51.681845 jq[1679]: true Feb 13 20:43:51.681955 update_engine[1669]: I20250213 20:43:51.608706 1669 update_check_scheduler.cc:74] Next update check in 4m18s Feb 13 20:43:51.572317 (ntainerd)[1690]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:43:51.584449 dbus-daemon[1649]: [system] SELinux support is enabled Feb 13 20:43:51.684548 extend-filesystems[1653]: Old size kept for /dev/sda9 Feb 13 20:43:51.684548 extend-filesystems[1653]: Found sr0 Feb 13 20:43:51.576320 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:43:51.697353 tar[1678]: linux-amd64/LICENSE Feb 13 20:43:51.697353 tar[1678]: linux-amd64/helm Feb 13 20:43:51.576551 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:43:51.585217 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:43:51.602536 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:43:51.602563 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:43:51.620995 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:43:51.621023 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:43:51.625417 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:43:51.631769 systemd-logind[1667]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:43:51.634201 systemd-logind[1667]: New seat seat0. Feb 13 20:43:51.638126 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:43:51.640942 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:43:51.660682 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:43:51.674386 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:43:51.674594 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
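[Editor's note] extend-filesystems enumerated the block devices and concluded "Old size kept for /dev/sda9", i.e. the root filesystem already fills its partition and no grow step was needed on this boot. A sketch for checking that by hand with generic tooling (the device names come from the log; the commands are illustrative):

    # Partition layout, sizes, and labels on the boot disk
    lsblk -o NAME,SIZE,FSTYPE,LABEL /dev/sda
    # The root filesystem size should match the sda9 partition size
    df -h /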
Feb 13 20:43:51.790065 coreos-metadata[1648]: Feb 13 20:43:51.781 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 20:43:51.790065 coreos-metadata[1648]: Feb 13 20:43:51.785 INFO Fetch successful Feb 13 20:43:51.790065 coreos-metadata[1648]: Feb 13 20:43:51.789 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 13 20:43:51.798890 coreos-metadata[1648]: Feb 13 20:43:51.794 INFO Fetch successful Feb 13 20:43:51.798890 coreos-metadata[1648]: Feb 13 20:43:51.797 INFO Fetching http://168.63.129.16/machine/a9e434b3-25a6-4b61-97ca-c60dcce69767/82612b28%2D9431%2D4225%2D9747%2D1f8dca8eee96.%5Fci%2D4081.3.1%2Da%2Dd679334e6e?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 13 20:43:51.799907 coreos-metadata[1648]: Feb 13 20:43:51.799 INFO Fetch successful Feb 13 20:43:51.807850 coreos-metadata[1648]: Feb 13 20:43:51.801 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 13 20:43:51.807926 bash[1720]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:43:51.811674 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:43:51.817460 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:43:51.822713 coreos-metadata[1648]: Feb 13 20:43:51.821 INFO Fetch successful Feb 13 20:43:51.823983 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1727) Feb 13 20:43:51.869689 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:43:51.878206 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:43:52.002667 sshd_keygen[1683]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:43:52.065543 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:43:52.079275 locksmithd[1700]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:43:52.081819 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:43:52.086782 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Feb 13 20:43:52.118555 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:43:52.118891 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:43:52.130240 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:43:52.169198 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Feb 13 20:43:52.176013 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:43:52.194573 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:43:52.202277 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:43:52.209461 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:43:52.611815 tar[1678]: linux-amd64/README.md Feb 13 20:43:52.625075 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:43:52.843202 containerd[1690]: time="2025-02-13T20:43:52.843028900Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:43:52.879671 containerd[1690]: time="2025-02-13T20:43:52.879558900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:43:52.881779 containerd[1690]: time="2025-02-13T20:43:52.881739500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:43:52.882295 containerd[1690]: time="2025-02-13T20:43:52.881894100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:43:52.882295 containerd[1690]: time="2025-02-13T20:43:52.881923800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:43:52.882295 containerd[1690]: time="2025-02-13T20:43:52.882115400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:43:52.882295 containerd[1690]: time="2025-02-13T20:43:52.882139500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:43:52.882295 containerd[1690]: time="2025-02-13T20:43:52.882210800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:43:52.882295 containerd[1690]: time="2025-02-13T20:43:52.882227600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:43:52.882700 containerd[1690]: time="2025-02-13T20:43:52.882676500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:43:52.882794 containerd[1690]: time="2025-02-13T20:43:52.882775900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:43:52.882877 containerd[1690]: time="2025-02-13T20:43:52.882860400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:43:52.882981 containerd[1690]: time="2025-02-13T20:43:52.882923200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:43:52.883415 containerd[1690]: time="2025-02-13T20:43:52.883105400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:43:52.883415 containerd[1690]: time="2025-02-13T20:43:52.883365800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:43:52.883700 containerd[1690]: time="2025-02-13T20:43:52.883656500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:43:52.883700 containerd[1690]: time="2025-02-13T20:43:52.883688600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 20:43:52.883830 containerd[1690]: time="2025-02-13T20:43:52.883808500Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:43:52.883888 containerd[1690]: time="2025-02-13T20:43:52.883872900Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:43:52.895030 containerd[1690]: time="2025-02-13T20:43:52.894789800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:43:52.895030 containerd[1690]: time="2025-02-13T20:43:52.894839200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:43:52.895030 containerd[1690]: time="2025-02-13T20:43:52.894860300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:43:52.895030 containerd[1690]: time="2025-02-13T20:43:52.894884100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:43:52.895030 containerd[1690]: time="2025-02-13T20:43:52.894903700Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:43:52.895217 containerd[1690]: time="2025-02-13T20:43:52.895134400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:43:52.895982 containerd[1690]: time="2025-02-13T20:43:52.895504800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:43:52.895982 containerd[1690]: time="2025-02-13T20:43:52.895638500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:43:52.895982 containerd[1690]: time="2025-02-13T20:43:52.895660600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:43:52.895982 containerd[1690]: time="2025-02-13T20:43:52.895679100Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:43:52.895982 containerd[1690]: time="2025-02-13T20:43:52.895697300Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:43:52.895982 containerd[1690]: time="2025-02-13T20:43:52.895742100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:43:52.895982 containerd[1690]: time="2025-02-13T20:43:52.895759500Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:43:52.895982 containerd[1690]: time="2025-02-13T20:43:52.895777500Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:43:52.895982 containerd[1690]: time="2025-02-13T20:43:52.895796200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:43:52.895982 containerd[1690]: time="2025-02-13T20:43:52.895812900Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:43:52.895982 containerd[1690]: time="2025-02-13T20:43:52.895829400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 13 20:43:52.895982 containerd[1690]: time="2025-02-13T20:43:52.895844700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:43:52.895982 containerd[1690]: time="2025-02-13T20:43:52.895869100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.895982 containerd[1690]: time="2025-02-13T20:43:52.895886300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.896491 containerd[1690]: time="2025-02-13T20:43:52.895902700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.896491 containerd[1690]: time="2025-02-13T20:43:52.895920000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.896491 containerd[1690]: time="2025-02-13T20:43:52.895936100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.896898 containerd[1690]: time="2025-02-13T20:43:52.895954000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.896898 containerd[1690]: time="2025-02-13T20:43:52.896653500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.896898 containerd[1690]: time="2025-02-13T20:43:52.896679200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.896898 containerd[1690]: time="2025-02-13T20:43:52.896714900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.896898 containerd[1690]: time="2025-02-13T20:43:52.896737300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.896898 containerd[1690]: time="2025-02-13T20:43:52.896754600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.896898 containerd[1690]: time="2025-02-13T20:43:52.896783600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.896898 containerd[1690]: time="2025-02-13T20:43:52.896801600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.896898 containerd[1690]: time="2025-02-13T20:43:52.896835300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:43:52.896898 containerd[1690]: time="2025-02-13T20:43:52.896877600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.897274 containerd[1690]: time="2025-02-13T20:43:52.896915300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.897274 containerd[1690]: time="2025-02-13T20:43:52.896945200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:43:52.897274 containerd[1690]: time="2025-02-13T20:43:52.897022800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 20:43:52.897274 containerd[1690]: time="2025-02-13T20:43:52.897112500Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:43:52.897274 containerd[1690]: time="2025-02-13T20:43:52.897129600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:43:52.897274 containerd[1690]: time="2025-02-13T20:43:52.897147000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:43:52.897274 containerd[1690]: time="2025-02-13T20:43:52.897161200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.897274 containerd[1690]: time="2025-02-13T20:43:52.897190300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:43:52.897274 containerd[1690]: time="2025-02-13T20:43:52.897206400Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:43:52.897274 containerd[1690]: time="2025-02-13T20:43:52.897220700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:43:52.899128 containerd[1690]: time="2025-02-13T20:43:52.897661000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:43:52.899128 containerd[1690]: time="2025-02-13T20:43:52.897760300Z" level=info msg="Connect containerd service" Feb 13 20:43:52.899128 containerd[1690]: time="2025-02-13T20:43:52.897817300Z" level=info msg="using legacy CRI server" Feb 13 20:43:52.899128 containerd[1690]: time="2025-02-13T20:43:52.897827200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:43:52.899128 containerd[1690]: time="2025-02-13T20:43:52.897954000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:43:52.899128 containerd[1690]: time="2025-02-13T20:43:52.898733700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:43:52.899128 containerd[1690]: time="2025-02-13T20:43:52.898866300Z" level=info msg="Start subscribing containerd event" Feb 13 20:43:52.899128 containerd[1690]: time="2025-02-13T20:43:52.898927900Z" level=info msg="Start recovering state" Feb 13 20:43:52.899128 containerd[1690]: time="2025-02-13T20:43:52.899019400Z" level=info msg="Start event monitor" Feb 13 20:43:52.899128 containerd[1690]: time="2025-02-13T20:43:52.899037400Z" level=info msg="Start snapshots syncer" Feb 13 20:43:52.899128 containerd[1690]: time="2025-02-13T20:43:52.899049400Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:43:52.899128 containerd[1690]: time="2025-02-13T20:43:52.899059100Z" level=info msg="Start streaming server" Feb 13 20:43:52.899719 containerd[1690]: time="2025-02-13T20:43:52.899682000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:43:52.901070 containerd[1690]: time="2025-02-13T20:43:52.899765000Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:43:52.901070 containerd[1690]: time="2025-02-13T20:43:52.900339500Z" level=info msg="containerd successfully booted in 0.058580s" Feb 13 20:43:52.900588 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:43:53.088814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:43:53.093416 (kubelet)[1808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:43:53.094283 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:43:53.098785 systemd[1]: Startup finished in 694ms (firmware) + 26.463s (loader) + 930ms (kernel) + 11.671s (initrd) + 12.958s (userspace) = 52.719s. Feb 13 20:43:53.340215 login[1791]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 20:43:53.342565 login[1792]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 20:43:53.355092 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
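[Editor's note] The containerd config dump above shows the CRI plugin driving runc through io.containerd.runc.v2 with SystemdCgroup:true and sandbox image registry.k8s.io/pause:3.8. A minimal config.toml sketch expressing those same settings — illustrative only; Flatcar ships its own containerd configuration, and this is not a file copied from the system:

    cat <<'EOF' > /etc/containerd/config.toml
    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      # Matches SandboxImage in the dump above
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      # Matches Options:map[SystemdCgroup:true] in the dump above
      SystemdCgroup = true
    EOF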
Feb 13 20:43:53.361247 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:43:53.367010 systemd-logind[1667]: New session 1 of user core. Feb 13 20:43:53.376257 systemd-logind[1667]: New session 2 of user core. Feb 13 20:43:53.389049 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:43:53.396492 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:43:53.402547 (systemd)[1820]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:43:53.655472 systemd[1820]: Queued start job for default target default.target. Feb 13 20:43:53.661421 systemd[1820]: Created slice app.slice - User Application Slice. Feb 13 20:43:53.661454 systemd[1820]: Reached target paths.target - Paths. Feb 13 20:43:53.661472 systemd[1820]: Reached target timers.target - Timers. Feb 13 20:43:53.665089 systemd[1820]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:43:53.677218 systemd[1820]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:43:53.678667 systemd[1820]: Reached target sockets.target - Sockets. Feb 13 20:43:53.678695 systemd[1820]: Reached target basic.target - Basic System. Feb 13 20:43:53.678746 systemd[1820]: Reached target default.target - Main User Target. Feb 13 20:43:53.678782 systemd[1820]: Startup finished in 267ms. Feb 13 20:43:53.678867 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:43:53.684471 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:43:53.685586 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:43:53.794941 kubelet[1808]: E0213 20:43:53.794784 1808 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:43:53.797622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:43:53.797796 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:43:53.798349 systemd[1]: kubelet.service: Consumed 1.010s CPU time. 
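[Editor's note] The kubelet exits because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is written during kubeadm init/join, so this crash-and-restart cycle is expected until the node joins a cluster. For illustration only, a minimal KubeletConfiguration of the kind kubeadm generates — the field values here are assumptions, not recovered from this system:

    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Align the kubelet's cgroup driver with containerd's SystemdCgroup=true
    cgroupDriver: systemd
    EOF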
Feb 13 20:43:54.207523 waagent[1789]: 2025-02-13T20:43:54.207415Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 20:43:54.226474 waagent[1789]: 2025-02-13T20:43:54.207973Z INFO Daemon Daemon OS: flatcar 4081.3.1 Feb 13 20:43:54.226474 waagent[1789]: 2025-02-13T20:43:54.209080Z INFO Daemon Daemon Python: 3.11.9 Feb 13 20:43:54.226474 waagent[1789]: 2025-02-13T20:43:54.209731Z INFO Daemon Daemon Run daemon Feb 13 20:43:54.226474 waagent[1789]: 2025-02-13T20:43:54.210625Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.1' Feb 13 20:43:54.226474 waagent[1789]: 2025-02-13T20:43:54.211519Z INFO Daemon Daemon Using waagent for provisioning Feb 13 20:43:54.226474 waagent[1789]: 2025-02-13T20:43:54.212603Z INFO Daemon Daemon Activate resource disk Feb 13 20:43:54.226474 waagent[1789]: 2025-02-13T20:43:54.212984Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 20:43:54.226474 waagent[1789]: 2025-02-13T20:43:54.216930Z INFO Daemon Daemon Found device: None Feb 13 20:43:54.226474 waagent[1789]: 2025-02-13T20:43:54.217329Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 20:43:54.226474 waagent[1789]: 2025-02-13T20:43:54.217753Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 20:43:54.226474 waagent[1789]: 2025-02-13T20:43:54.220157Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 20:43:54.226474 waagent[1789]: 2025-02-13T20:43:54.221191Z INFO Daemon Daemon Running default provisioning handler Feb 13 20:43:54.257093 waagent[1789]: 2025-02-13T20:43:54.257005Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Feb 13 20:43:54.263767 waagent[1789]: 2025-02-13T20:43:54.263690Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 20:43:54.269829 waagent[1789]: 2025-02-13T20:43:54.267933Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 20:43:54.269829 waagent[1789]: 2025-02-13T20:43:54.268166Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 20:43:54.390293 waagent[1789]: 2025-02-13T20:43:54.390194Z INFO Daemon Daemon Successfully mounted dvd Feb 13 20:43:54.403050 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 13 20:43:54.404475 waagent[1789]: 2025-02-13T20:43:54.404397Z INFO Daemon Daemon Detect protocol endpoint Feb 13 20:43:54.404893 waagent[1789]: 2025-02-13T20:43:54.404845Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 20:43:54.405743 waagent[1789]: 2025-02-13T20:43:54.405704Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 13 20:43:54.406504 waagent[1789]: 2025-02-13T20:43:54.406465Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 20:43:54.407397 waagent[1789]: 2025-02-13T20:43:54.407357Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 20:43:54.408070 waagent[1789]: 2025-02-13T20:43:54.408035Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 20:43:54.432239 waagent[1789]: 2025-02-13T20:43:54.432191Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 20:43:54.439389 waagent[1789]: 2025-02-13T20:43:54.432580Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 20:43:54.439389 waagent[1789]: 2025-02-13T20:43:54.433720Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 20:43:54.501043 waagent[1789]: 2025-02-13T20:43:54.500899Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 20:43:54.505022 waagent[1789]: 2025-02-13T20:43:54.504877Z INFO Daemon Daemon Forcing an update of the goal state. Feb 13 20:43:54.508385 waagent[1789]: 2025-02-13T20:43:54.508332Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 20:43:54.521624 waagent[1789]: 2025-02-13T20:43:54.521570Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 20:43:54.536076 waagent[1789]: 2025-02-13T20:43:54.522168Z INFO Daemon Feb 13 20:43:54.536076 waagent[1789]: 2025-02-13T20:43:54.523162Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 0780a56a-9afe-49ca-8218-67302bf09e2c eTag: 6922409034309739837 source: Fabric] Feb 13 20:43:54.536076 waagent[1789]: 2025-02-13T20:43:54.524032Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Feb 13 20:43:54.536076 waagent[1789]: 2025-02-13T20:43:54.525045Z INFO Daemon Feb 13 20:43:54.536076 waagent[1789]: 2025-02-13T20:43:54.525815Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 20:43:54.538858 waagent[1789]: 2025-02-13T20:43:54.538814Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 20:43:54.663818 waagent[1789]: 2025-02-13T20:43:54.663741Z INFO Daemon Downloaded certificate {'thumbprint': 'C952D0FF5A85E497858D136A0A988527BE6E8C56', 'hasPrivateKey': True} Feb 13 20:43:54.669213 waagent[1789]: 2025-02-13T20:43:54.669153Z INFO Daemon Downloaded certificate {'thumbprint': 'BE9D82F67CE333400F72D950F47671FC585D042E', 'hasPrivateKey': False} Feb 13 20:43:54.673647 waagent[1789]: 2025-02-13T20:43:54.673587Z INFO Daemon Fetch goal state completed Feb 13 20:43:54.709904 waagent[1789]: 2025-02-13T20:43:54.709825Z INFO Daemon Daemon Starting provisioning Feb 13 20:43:54.713190 waagent[1789]: 2025-02-13T20:43:54.713057Z INFO Daemon Daemon Handle ovf-env.xml. Feb 13 20:43:54.718452 waagent[1789]: 2025-02-13T20:43:54.713294Z INFO Daemon Daemon Set hostname [ci-4081.3.1-a-d679334e6e] Feb 13 20:43:54.742481 waagent[1789]: 2025-02-13T20:43:54.742393Z INFO Daemon Daemon Publish hostname [ci-4081.3.1-a-d679334e6e] Feb 13 20:43:54.750118 waagent[1789]: 2025-02-13T20:43:54.743004Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 20:43:54.750118 waagent[1789]: 2025-02-13T20:43:54.743830Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 20:43:54.768656 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:43:54.768666 systemd-networkd[1448]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
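[Editor's note] waagent's protocol detection hinges on reaching the WireServer at 168.63.129.16: it explicitly tests for a route ("Test for route to 168.63.129.16 ... Route to 168.63.129.16 exists") before fetching the goal state. The same check by hand, using the versions endpoint the log itself fetches:

    # Which interface and source address would be used to reach the wire server
    ip route get 168.63.129.16
    # The goal-state service answers on plain HTTP port 80
    curl -s 'http://168.63.129.16/?comp=versions' | head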
Feb 13 20:43:54.768712 systemd-networkd[1448]: eth0: DHCP lease lost Feb 13 20:43:54.769940 waagent[1789]: 2025-02-13T20:43:54.769865Z INFO Daemon Daemon Create user account if not exists Feb 13 20:43:54.785421 waagent[1789]: 2025-02-13T20:43:54.770275Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 20:43:54.785421 waagent[1789]: 2025-02-13T20:43:54.772132Z INFO Daemon Daemon Configure sudoer Feb 13 20:43:54.785421 waagent[1789]: 2025-02-13T20:43:54.773263Z INFO Daemon Daemon Configure sshd Feb 13 20:43:54.785421 waagent[1789]: 2025-02-13T20:43:54.773968Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 20:43:54.785421 waagent[1789]: 2025-02-13T20:43:54.774592Z INFO Daemon Daemon Deploy ssh public key. Feb 13 20:43:54.787044 systemd-networkd[1448]: eth0: DHCPv6 lease lost Feb 13 20:43:54.827072 systemd-networkd[1448]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 20:43:55.908750 waagent[1789]: 2025-02-13T20:43:55.908639Z INFO Daemon Daemon Provisioning complete Feb 13 20:43:55.922067 waagent[1789]: 2025-02-13T20:43:55.922010Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 20:43:55.929237 waagent[1789]: 2025-02-13T20:43:55.922352Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 13 20:43:55.929237 waagent[1789]: 2025-02-13T20:43:55.923290Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 20:43:56.046689 waagent[1877]: 2025-02-13T20:43:56.046581Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 20:43:56.047102 waagent[1877]: 2025-02-13T20:43:56.046754Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.1 Feb 13 20:43:56.047102 waagent[1877]: 2025-02-13T20:43:56.046837Z INFO ExtHandler ExtHandler Python: 3.11.9 Feb 13 20:43:56.080539 waagent[1877]: 2025-02-13T20:43:56.080438Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 20:43:56.080757 waagent[1877]: 2025-02-13T20:43:56.080709Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 20:43:56.080850 waagent[1877]: 2025-02-13T20:43:56.080809Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 20:43:56.088387 waagent[1877]: 2025-02-13T20:43:56.088320Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 20:43:56.093597 waagent[1877]: 2025-02-13T20:43:56.093540Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 20:43:56.094073 waagent[1877]: 2025-02-13T20:43:56.094022Z INFO ExtHandler Feb 13 20:43:56.094167 waagent[1877]: 2025-02-13T20:43:56.094116Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 31b99327-bb83-4bbc-9c72-73bcd2f165a0 eTag: 6922409034309739837 source: Fabric] Feb 13 20:43:56.094474 waagent[1877]: 2025-02-13T20:43:56.094423Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 13 20:43:56.095027 waagent[1877]: 2025-02-13T20:43:56.094955Z INFO ExtHandler Feb 13 20:43:56.095091 waagent[1877]: 2025-02-13T20:43:56.095059Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 20:43:56.098858 waagent[1877]: 2025-02-13T20:43:56.098820Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 20:43:56.168688 waagent[1877]: 2025-02-13T20:43:56.168544Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C952D0FF5A85E497858D136A0A988527BE6E8C56', 'hasPrivateKey': True} Feb 13 20:43:56.169088 waagent[1877]: 2025-02-13T20:43:56.169036Z INFO ExtHandler Downloaded certificate {'thumbprint': 'BE9D82F67CE333400F72D950F47671FC585D042E', 'hasPrivateKey': False} Feb 13 20:43:56.169522 waagent[1877]: 2025-02-13T20:43:56.169472Z INFO ExtHandler Fetch goal state completed Feb 13 20:43:56.185318 waagent[1877]: 2025-02-13T20:43:56.185255Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1877 Feb 13 20:43:56.185466 waagent[1877]: 2025-02-13T20:43:56.185420Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 20:43:56.187002 waagent[1877]: 2025-02-13T20:43:56.186930Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 20:43:56.187380 waagent[1877]: 2025-02-13T20:43:56.187328Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 20:43:56.243287 waagent[1877]: 2025-02-13T20:43:56.243233Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 20:43:56.243546 waagent[1877]: 2025-02-13T20:43:56.243490Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 20:43:56.251317 waagent[1877]: 2025-02-13T20:43:56.251269Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 13 20:43:56.258091 systemd[1]: Reloading requested from client PID 1892 ('systemctl') (unit waagent.service)... Feb 13 20:43:56.258107 systemd[1]: Reloading... Feb 13 20:43:56.348048 zram_generator::config[1929]: No configuration found. Feb 13 20:43:56.467378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:43:56.546608 systemd[1]: Reloading finished in 288 ms. Feb 13 20:43:56.579635 waagent[1877]: 2025-02-13T20:43:56.579531Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 20:43:56.587013 systemd[1]: Reloading requested from client PID 1983 ('systemctl') (unit waagent.service)... Feb 13 20:43:56.587027 systemd[1]: Reloading... Feb 13 20:43:56.676984 zram_generator::config[2026]: No configuration found. Feb 13 20:43:56.784035 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:43:56.864141 systemd[1]: Reloading finished in 276 ms. 
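[Editor's note] Every daemon-reload repeats the warning that docker.socket's ListenStream= points at /var/run/docker.sock, a legacy symlinked directory that systemd rewrites to /run/docker.sock at load time. A drop-in override silences it; note that ListenStream= must be cleared with an empty assignment first, because socket listeners are list-valued settings:

    sudo systemctl edit docker.socket
    # ...then, in the editor, add:
    #   [Socket]
    #   ListenStream=
    #   ListenStream=/run/docker.sock
    sudo systemctl daemon-reload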
Feb 13 20:43:56.889991 waagent[1877]: 2025-02-13T20:43:56.889773Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 20:43:56.890172 waagent[1877]: 2025-02-13T20:43:56.890108Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 20:43:58.009990 waagent[1877]: 2025-02-13T20:43:58.009876Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 13 20:43:58.010843 waagent[1877]: 2025-02-13T20:43:58.010770Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 20:43:58.013579 waagent[1877]: 2025-02-13T20:43:58.013516Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 20:43:58.014004 waagent[1877]: 2025-02-13T20:43:58.013913Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 20:43:58.014402 waagent[1877]: 2025-02-13T20:43:58.014338Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 13 20:43:58.014543 waagent[1877]: 2025-02-13T20:43:58.014485Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 20:43:58.014646 waagent[1877]: 2025-02-13T20:43:58.014593Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 20:43:58.015103 waagent[1877]: 2025-02-13T20:43:58.015042Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 13 20:43:58.015255 waagent[1877]: 2025-02-13T20:43:58.015173Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 20:43:58.015715 waagent[1877]: 2025-02-13T20:43:58.015640Z INFO EnvHandler ExtHandler Configure routes Feb 13 20:43:58.015838 waagent[1877]: 2025-02-13T20:43:58.015784Z INFO EnvHandler ExtHandler Gateway:None Feb 13 20:43:58.015953 waagent[1877]: 2025-02-13T20:43:58.015901Z INFO EnvHandler ExtHandler Routes:None Feb 13 20:43:58.016105 waagent[1877]: 2025-02-13T20:43:58.016044Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 20:43:58.016354 waagent[1877]: 2025-02-13T20:43:58.016295Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 13 20:43:58.017384 waagent[1877]: 2025-02-13T20:43:58.017336Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 20:43:58.017614 waagent[1877]: 2025-02-13T20:43:58.017541Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 13 20:43:58.017736 waagent[1877]: 2025-02-13T20:43:58.017692Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 13 20:43:58.017736 waagent[1877]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 13 20:43:58.017736 waagent[1877]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 13 20:43:58.017736 waagent[1877]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 13 20:43:58.017736 waagent[1877]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 13 20:43:58.017736 waagent[1877]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 20:43:58.017736 waagent[1877]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 20:43:58.018588 waagent[1877]: 2025-02-13T20:43:58.018535Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 20:43:58.025407 waagent[1877]: 2025-02-13T20:43:58.025366Z INFO ExtHandler ExtHandler Feb 13 20:43:58.025492 waagent[1877]: 2025-02-13T20:43:58.025455Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: bb77fc81-f680-48f9-bf19-6ef60d49524e correlation 9ae14ae0-5283-4cac-a329-bf4956ae7eeb created: 2025-02-13T20:42:49.777908Z] Feb 13 20:43:58.025835 waagent[1877]: 2025-02-13T20:43:58.025788Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 13 20:43:58.026392 waagent[1877]: 2025-02-13T20:43:58.026340Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Feb 13 20:43:58.055732 waagent[1877]: 2025-02-13T20:43:58.055671Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E25D9A61-CAAA-4368-AAFC-C45CC7E20899;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Feb 13 20:43:58.073492 waagent[1877]: 2025-02-13T20:43:58.073423Z INFO MonitorHandler ExtHandler Network interfaces: Feb 13 20:43:58.073492 waagent[1877]: Executing ['ip', '-a', '-o', 'link']: Feb 13 20:43:58.073492 waagent[1877]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 13 20:43:58.073492 waagent[1877]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:21:73:cb brd ff:ff:ff:ff:ff:ff Feb 13 20:43:58.073492 waagent[1877]: 3: enP11075s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:21:73:cb brd ff:ff:ff:ff:ff:ff\ altname enP11075p0s2 Feb 13 20:43:58.073492 waagent[1877]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 13 20:43:58.073492 waagent[1877]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 13 20:43:58.073492 waagent[1877]: 2: eth0 inet 10.200.8.4/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 13 20:43:58.073492 waagent[1877]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 13 20:43:58.073492 waagent[1877]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Feb 13 20:43:58.073492 waagent[1877]: 2: eth0 inet6 fe80::7e1e:52ff:fe21:73cb/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 20:43:58.073492 waagent[1877]: 3: enP11075s1 inet6 fe80::7e1e:52ff:fe21:73cb/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 20:43:58.159444 waagent[1877]: 2025-02-13T20:43:58.159369Z INFO EnvHandler ExtHandler 
Successfully added Azure fabric firewall rules. Current Firewall rules: Feb 13 20:43:58.159444 waagent[1877]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:43:58.159444 waagent[1877]: pkts bytes target prot opt in out source destination Feb 13 20:43:58.159444 waagent[1877]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:43:58.159444 waagent[1877]: pkts bytes target prot opt in out source destination Feb 13 20:43:58.159444 waagent[1877]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:43:58.159444 waagent[1877]: pkts bytes target prot opt in out source destination Feb 13 20:43:58.159444 waagent[1877]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 20:43:58.159444 waagent[1877]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 20:43:58.159444 waagent[1877]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 20:43:58.162636 waagent[1877]: 2025-02-13T20:43:58.162578Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 13 20:43:58.162636 waagent[1877]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:43:58.162636 waagent[1877]: pkts bytes target prot opt in out source destination Feb 13 20:43:58.162636 waagent[1877]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:43:58.162636 waagent[1877]: pkts bytes target prot opt in out source destination Feb 13 20:43:58.162636 waagent[1877]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:43:58.162636 waagent[1877]: pkts bytes target prot opt in out source destination Feb 13 20:43:58.162636 waagent[1877]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 20:43:58.162636 waagent[1877]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 20:43:58.162636 waagent[1877]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 20:43:58.163041 waagent[1877]: 2025-02-13T20:43:58.162870Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 13 20:44:04.048541 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:44:04.054257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:44:04.158602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:44:04.163189 (kubelet)[2113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:44:04.197780 kubelet[2113]: E0213 20:44:04.197741 2113 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:44:04.201305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:44:04.201512 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:44:14.283023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:44:14.288196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:44:14.392462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
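The routing table MonitorHandler printed earlier is raw /proc/net/route output, with each address as little-endian hex. Decoding shows the default route via 10.200.8.1, the on-link 10.200.8.0/24 subnet, and host routes to 168.63.129.16 (the WireServer) and 169.254.169.254 (the instance metadata endpoint). A small decoding sketch:

    # Decode a little-endian hex IPv4 address from /proc/net/route.
    hex2ip() { printf '%d.%d.%d.%d\n' \
        $((0x${1:6:2})) $((0x${1:4:2})) $((0x${1:2:2})) $((0x${1:0:2})); }
    hex2ip 0108C80A   # 10.200.8.1      (default gateway)
    hex2ip 0008C80A   # 10.200.8.0      (on-link subnet)
    hex2ip 10813FA8   # 168.63.129.16   (WireServer host route)
    hex2ip FEA9FEA9   # 169.254.169.254 (metadata host route)

The OUTPUT-chain rules EnvHandler lists allow DNS and root-owned TCP traffic to the WireServer while dropping new connections from everything else. Roughly equivalent iptables invocations (a sketch; waagent installs these itself, and the exact flags it uses are not shown in the log):

    iptables -w -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -w -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -w -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP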
Feb 13 20:44:14.396831 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:44:15.072749 kubelet[2129]: E0213 20:44:15.072693 2129 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:44:15.075029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:44:15.075224 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:44:15.297313 chronyd[1662]: Selected source PHC0 Feb 13 20:44:25.283329 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 20:44:25.288198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:44:25.383663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:44:25.393284 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:44:25.427579 kubelet[2144]: E0213 20:44:25.427526 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:44:25.429692 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:44:25.429895 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:44:27.276285 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:44:27.284443 systemd[1]: Started sshd@0-10.200.8.4:22-10.200.16.10:59846.service - OpenSSH per-connection server daemon (10.200.16.10:59846). Feb 13 20:44:27.941898 sshd[2152]: Accepted publickey for core from 10.200.16.10 port 59846 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:44:27.943677 sshd[2152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:27.948141 systemd-logind[1667]: New session 3 of user core. Feb 13 20:44:27.953126 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:44:28.487798 systemd[1]: Started sshd@1-10.200.8.4:22-10.200.16.10:59856.service - OpenSSH per-connection server daemon (10.200.16.10:59856). Feb 13 20:44:29.107468 sshd[2157]: Accepted publickey for core from 10.200.16.10 port 59856 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:44:29.109232 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:29.114653 systemd-logind[1667]: New session 4 of user core. Feb 13 20:44:29.124130 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:44:29.552523 sshd[2157]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:29.556727 systemd[1]: sshd@1-10.200.8.4:22-10.200.16.10:59856.service: Deactivated successfully. Feb 13 20:44:29.558475 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:44:29.559259 systemd-logind[1667]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:44:29.560167 systemd-logind[1667]: Removed session 4. 
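The kubelet restart loop above (and the earlier attempts at 20:44:04 and 20:44:14) fails for a single reason: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-style node that file is written during init/join, so the loop is expected until provisioning completes. A minimal illustrative config that would satisfy --config (field values are assumptions, not recovered from this host; kubeadm normally generates a much fuller file):

    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF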
Feb 13 20:44:29.661585 systemd[1]: Started sshd@2-10.200.8.4:22-10.200.16.10:48810.service - OpenSSH per-connection server daemon (10.200.16.10:48810). Feb 13 20:44:30.283564 sshd[2164]: Accepted publickey for core from 10.200.16.10 port 48810 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:44:30.302513 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:30.308084 systemd-logind[1667]: New session 5 of user core. Feb 13 20:44:30.316103 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:44:30.724210 sshd[2164]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:30.727924 systemd[1]: sshd@2-10.200.8.4:22-10.200.16.10:48810.service: Deactivated successfully. Feb 13 20:44:30.730061 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:44:30.731392 systemd-logind[1667]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:44:30.732437 systemd-logind[1667]: Removed session 5. Feb 13 20:44:30.845685 systemd[1]: Started sshd@3-10.200.8.4:22-10.200.16.10:48820.service - OpenSSH per-connection server daemon (10.200.16.10:48820). Feb 13 20:44:31.470951 sshd[2171]: Accepted publickey for core from 10.200.16.10 port 48820 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:44:31.472692 sshd[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:31.477595 systemd-logind[1667]: New session 6 of user core. Feb 13 20:44:31.488103 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:44:31.917496 sshd[2171]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:31.921149 systemd[1]: sshd@3-10.200.8.4:22-10.200.16.10:48820.service: Deactivated successfully. Feb 13 20:44:31.923517 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:44:31.925360 systemd-logind[1667]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:44:31.926238 systemd-logind[1667]: Removed session 6. Feb 13 20:44:32.032475 systemd[1]: Started sshd@4-10.200.8.4:22-10.200.16.10:48832.service - OpenSSH per-connection server daemon (10.200.16.10:48832). Feb 13 20:44:32.652640 sshd[2178]: Accepted publickey for core from 10.200.16.10 port 48832 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:44:32.654380 sshd[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:32.659937 systemd-logind[1667]: New session 7 of user core. Feb 13 20:44:32.669132 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:44:33.063923 sudo[2181]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:44:33.064393 sudo[2181]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:44:33.121654 sudo[2181]: pam_unix(sudo:session): session closed for user root Feb 13 20:44:33.222511 sshd[2178]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:33.226154 systemd[1]: sshd@4-10.200.8.4:22-10.200.16.10:48832.service: Deactivated successfully. Feb 13 20:44:33.228541 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:44:33.230371 systemd-logind[1667]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:44:33.231482 systemd-logind[1667]: Removed session 7. Feb 13 20:44:33.333232 systemd[1]: Started sshd@5-10.200.8.4:22-10.200.16.10:48846.service - OpenSSH per-connection server daemon (10.200.16.10:48846). 
Feb 13 20:44:33.962816 sshd[2186]: Accepted publickey for core from 10.200.16.10 port 48846 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:44:33.964658 sshd[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:33.969480 systemd-logind[1667]: New session 8 of user core. Feb 13 20:44:33.975117 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:44:34.311757 sudo[2190]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:44:34.312255 sudo[2190]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:44:34.315353 sudo[2190]: pam_unix(sudo:session): session closed for user root Feb 13 20:44:34.320175 sudo[2189]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:44:34.320512 sudo[2189]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:44:34.332283 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:44:34.334464 auditctl[2193]: No rules Feb 13 20:44:34.334811 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:44:34.335024 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:44:34.337406 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:44:34.362810 augenrules[2211]: No rules Feb 13 20:44:34.364075 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:44:34.365228 sudo[2189]: pam_unix(sudo:session): session closed for user root Feb 13 20:44:34.465994 sshd[2186]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:34.469323 systemd[1]: sshd@5-10.200.8.4:22-10.200.16.10:48846.service: Deactivated successfully. Feb 13 20:44:34.471504 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:44:34.473500 systemd-logind[1667]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:44:34.474529 systemd-logind[1667]: Removed session 8. Feb 13 20:44:34.575774 systemd[1]: Started sshd@6-10.200.8.4:22-10.200.16.10:48854.service - OpenSSH per-connection server daemon (10.200.16.10:48854). Feb 13 20:44:34.946831 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 13 20:44:35.201343 sshd[2219]: Accepted publickey for core from 10.200.16.10 port 48854 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:44:35.203072 sshd[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:35.208347 systemd-logind[1667]: New session 9 of user core. Feb 13 20:44:35.210113 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:44:35.532873 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 20:44:35.538443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:44:35.549606 sudo[2223]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:44:35.550389 sudo[2223]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:44:35.766590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
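In session 8 above, sudo deleted the SELinux and default audit rules files and restarted audit-rules, which is why both auditctl and augenrules report "No rules". A sketch of the same sequence by hand, using the standard auditd tooling already present on this host:

    rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    augenrules --load    # recompiles /etc/audit/rules.d/ into the kernel rule set
    auditctl -l          # prints "No rules" once the list is empty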
Feb 13 20:44:35.771134 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:44:36.267192 kubelet[2235]: E0213 20:44:36.267139 2235 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:44:36.269374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:44:36.269576 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:44:36.539642 update_engine[1669]: I20250213 20:44:36.539583 1669 update_attempter.cc:509] Updating boot flags... Feb 13 20:44:36.608409 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2259) Feb 13 20:44:36.721999 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2260) Feb 13 20:44:36.816998 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2260) Feb 13 20:44:37.517294 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:44:37.519044 (dockerd)[2345]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:44:39.475339 dockerd[2345]: time="2025-02-13T20:44:39.475275968Z" level=info msg="Starting up" Feb 13 20:44:39.933803 dockerd[2345]: time="2025-02-13T20:44:39.933758847Z" level=info msg="Loading containers: start." Feb 13 20:44:40.087020 kernel: Initializing XFRM netlink socket Feb 13 20:44:40.204473 systemd-networkd[1448]: docker0: Link UP Feb 13 20:44:40.276126 dockerd[2345]: time="2025-02-13T20:44:40.276088326Z" level=info msg="Loading containers: done." Feb 13 20:44:40.332639 dockerd[2345]: time="2025-02-13T20:44:40.332593444Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:44:40.332820 dockerd[2345]: time="2025-02-13T20:44:40.332708347Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:44:40.332870 dockerd[2345]: time="2025-02-13T20:44:40.332825349Z" level=info msg="Daemon has completed initialization" Feb 13 20:44:40.377618 dockerd[2345]: time="2025-02-13T20:44:40.376851221Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:44:40.377029 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:44:41.171629 containerd[1690]: time="2025-02-13T20:44:41.171584658Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 20:44:42.124413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254289222.mount: Deactivated successfully. 
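dockerd comes up on overlay2 but warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; this only degrades image-build performance, not container runtime behavior. Quick checks (a sketch; /proc/config.gz exists only when the kernel is built with it):

    docker info --format '{{.Driver}}'                   # expect: overlay2
    zcat /proc/config.gz | grep OVERLAY_FS_REDIRECT_DIR  # confirm the kernel option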
Feb 13 20:44:43.873626 containerd[1690]: time="2025-02-13T20:44:43.873505861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:43.878496 containerd[1690]: time="2025-02-13T20:44:43.878325656Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673939" Feb 13 20:44:43.882587 containerd[1690]: time="2025-02-13T20:44:43.882538940Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:43.888978 containerd[1690]: time="2025-02-13T20:44:43.888899366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:43.890130 containerd[1690]: time="2025-02-13T20:44:43.889941286Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 2.718311927s" Feb 13 20:44:43.890130 containerd[1690]: time="2025-02-13T20:44:43.889994987Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 20:44:43.890870 containerd[1690]: time="2025-02-13T20:44:43.890666101Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 20:44:45.772264 containerd[1690]: time="2025-02-13T20:44:45.772201358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:45.774135 containerd[1690]: time="2025-02-13T20:44:45.774064095Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771792" Feb 13 20:44:45.779291 containerd[1690]: time="2025-02-13T20:44:45.779230197Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:45.784761 containerd[1690]: time="2025-02-13T20:44:45.784698606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:45.785908 containerd[1690]: time="2025-02-13T20:44:45.785764727Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 1.895012425s" Feb 13 20:44:45.785908 containerd[1690]: time="2025-02-13T20:44:45.785804428Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 20:44:45.786731 
containerd[1690]: time="2025-02-13T20:44:45.786620744Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 20:44:46.282617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 20:44:46.288183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:44:46.389652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:44:46.394279 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:44:46.437598 kubelet[2544]: E0213 20:44:46.437548 2544 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:44:46.439818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:44:46.440068 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:44:47.684617 containerd[1690]: time="2025-02-13T20:44:47.684557626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:47.686584 containerd[1690]: time="2025-02-13T20:44:47.686516173Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170284" Feb 13 20:44:47.691001 containerd[1690]: time="2025-02-13T20:44:47.690908078Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:47.695852 containerd[1690]: time="2025-02-13T20:44:47.695792195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:47.696920 containerd[1690]: time="2025-02-13T20:44:47.696771718Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 1.910115074s" Feb 13 20:44:47.696920 containerd[1690]: time="2025-02-13T20:44:47.696811719Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 20:44:47.697707 containerd[1690]: time="2025-02-13T20:44:47.697497636Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 20:44:49.104575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1959161093.mount: Deactivated successfully. 
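The PullImage/ImageCreate pairs above are containerd resolving each control-plane image by tag, logging both the compressed bytes read and the slightly different unpacked size from the manifest. The same pulls can be issued by hand through CRI or containerd's k8s.io namespace (a sketch):

    crictl pull registry.k8s.io/kube-apiserver:v1.32.2
    ctr -n k8s.io images ls | grep kube-apiserver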
Feb 13 20:44:49.633943 containerd[1690]: time="2025-02-13T20:44:49.633878722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:49.635716 containerd[1690]: time="2025-02-13T20:44:49.635644064Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908847" Feb 13 20:44:49.638452 containerd[1690]: time="2025-02-13T20:44:49.638387330Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:49.642804 containerd[1690]: time="2025-02-13T20:44:49.642771435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:49.643510 containerd[1690]: time="2025-02-13T20:44:49.643343148Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 1.945682509s" Feb 13 20:44:49.643510 containerd[1690]: time="2025-02-13T20:44:49.643383649Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 20:44:49.644124 containerd[1690]: time="2025-02-13T20:44:49.644093166Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 20:44:50.365441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4182788682.mount: Deactivated successfully. 
Feb 13 20:44:51.599067 containerd[1690]: time="2025-02-13T20:44:51.599016496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:51.601038 containerd[1690]: time="2025-02-13T20:44:51.600975343Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Feb 13 20:44:51.604881 containerd[1690]: time="2025-02-13T20:44:51.604739533Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:51.608933 containerd[1690]: time="2025-02-13T20:44:51.608872232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:51.610098 containerd[1690]: time="2025-02-13T20:44:51.609933058Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.965807991s" Feb 13 20:44:51.610098 containerd[1690]: time="2025-02-13T20:44:51.609985159Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 20:44:51.610801 containerd[1690]: time="2025-02-13T20:44:51.610773578Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 20:44:52.166391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2297832394.mount: Deactivated successfully. 
Feb 13 20:44:52.189323 containerd[1690]: time="2025-02-13T20:44:52.189281836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:52.191215 containerd[1690]: time="2025-02-13T20:44:52.191157781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Feb 13 20:44:52.196054 containerd[1690]: time="2025-02-13T20:44:52.196003097Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:52.199208 containerd[1690]: time="2025-02-13T20:44:52.199162573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:52.200429 containerd[1690]: time="2025-02-13T20:44:52.199838189Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 589.02471ms" Feb 13 20:44:52.200429 containerd[1690]: time="2025-02-13T20:44:52.199873990Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 20:44:52.200429 containerd[1690]: time="2025-02-13T20:44:52.200323401Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 20:44:52.865061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3377575065.mount: Deactivated successfully. Feb 13 20:44:55.097007 containerd[1690]: time="2025-02-13T20:44:55.096936662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:55.099890 containerd[1690]: time="2025-02-13T20:44:55.099832016Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551328" Feb 13 20:44:55.104343 containerd[1690]: time="2025-02-13T20:44:55.104260399Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:55.109787 containerd[1690]: time="2025-02-13T20:44:55.109625800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:55.111425 containerd[1690]: time="2025-02-13T20:44:55.111214029Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.910862228s" Feb 13 20:44:55.111425 containerd[1690]: time="2025-02-13T20:44:55.111250330Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 20:44:56.532873 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Feb 13 20:44:56.543813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:44:56.676129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:44:56.682469 (kubelet)[2703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:44:56.739092 kubelet[2703]: E0213 20:44:56.739046 2703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:44:56.742311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:44:56.742498 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:44:57.466723 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:44:57.472261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:44:57.504712 systemd[1]: Reloading requested from client PID 2717 ('systemctl') (unit session-9.scope)... Feb 13 20:44:57.504732 systemd[1]: Reloading... Feb 13 20:44:57.633984 zram_generator::config[2753]: No configuration found. Feb 13 20:44:57.761317 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:44:57.840610 systemd[1]: Reloading finished in 335 ms. Feb 13 20:44:57.894199 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:44:57.894424 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:44:57.894732 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:44:57.897275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:44:58.263903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:44:58.273293 (kubelet)[2828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:44:58.306784 kubelet[2828]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:44:58.307106 kubelet[2828]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 20:44:58.307106 kubelet[2828]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
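The flag deprecation warnings above all point at the same remedy: move the settings into the file passed via --config. A sketch of the equivalent KubeletConfiguration fields (the endpoint value is the conventional containerd socket, assumed rather than logged; the volume plugin dir matches the Flexvolume path kubelet recreates below; --pod-infra-container-image has no config equivalent and is dropped in 1.35 per the warning):

    cat >>/var/lib/kubelet/config.yaml <<'EOF'
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF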
Feb 13 20:44:58.307187 kubelet[2828]: I0213 20:44:58.307124 2828 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:44:59.002493 kubelet[2828]: I0213 20:44:59.001955 2828 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:44:59.002493 kubelet[2828]: I0213 20:44:59.002031 2828 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:44:59.002493 kubelet[2828]: I0213 20:44:59.002449 2828 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:44:59.055932 kubelet[2828]: E0213 20:44:59.055328 2828 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:44:59.057062 kubelet[2828]: I0213 20:44:59.056649 2828 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:44:59.068423 kubelet[2828]: E0213 20:44:59.068390 2828 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:44:59.068423 kubelet[2828]: I0213 20:44:59.068419 2828 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:44:59.071716 kubelet[2828]: I0213 20:44:59.071683 2828 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:44:59.071967 kubelet[2828]: I0213 20:44:59.071925 2828 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:44:59.072178 kubelet[2828]: I0213 20:44:59.071977 2828 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-a-d679334e6e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:44:59.072335 kubelet[2828]: I0213 20:44:59.072182 2828 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:44:59.072335 kubelet[2828]: I0213 20:44:59.072196 2828 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:44:59.072335 kubelet[2828]: I0213 20:44:59.072329 2828 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:44:59.075563 kubelet[2828]: I0213 20:44:59.075543 2828 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:44:59.075563 kubelet[2828]: I0213 20:44:59.075566 2828 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:44:59.075690 kubelet[2828]: I0213 20:44:59.075590 2828 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:44:59.075690 kubelet[2828]: I0213 20:44:59.075602 2828 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:44:59.080243 kubelet[2828]: I0213 20:44:59.079562 2828 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:44:59.080243 kubelet[2828]: I0213 20:44:59.080079 2828 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:44:59.081494 kubelet[2828]: W0213 20:44:59.080812 2828 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
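The HardEvictionThresholds embedded in the NodeConfig dump above are the stock kubelet defaults. Written out as the KubeletConfiguration fragment they correspond to (percentages converted from the 0.1/0.05/0.15 fractions in the blob):

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"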
Feb 13 20:44:59.083086 kubelet[2828]: I0213 20:44:59.082785 2828 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:44:59.083086 kubelet[2828]: I0213 20:44:59.082826 2828 server.go:1287] "Started kubelet" Feb 13 20:44:59.083086 kubelet[2828]: W0213 20:44:59.082986 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 13 20:44:59.083086 kubelet[2828]: E0213 20:44:59.083054 2828 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:44:59.090722 kubelet[2828]: I0213 20:44:59.088875 2828 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:44:59.090722 kubelet[2828]: W0213 20:44:59.088864 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-d679334e6e&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 13 20:44:59.090722 kubelet[2828]: E0213 20:44:59.088913 2828 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-d679334e6e&limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:44:59.090722 kubelet[2828]: I0213 20:44:59.088939 2828 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:44:59.090722 kubelet[2828]: I0213 20:44:59.089029 2828 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:44:59.091098 kubelet[2828]: I0213 20:44:59.091055 2828 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:44:59.091457 kubelet[2828]: I0213 20:44:59.091440 2828 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:44:59.092165 kubelet[2828]: I0213 20:44:59.092145 2828 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:44:59.095393 kubelet[2828]: E0213 20:44:59.093917 2828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-a-d679334e6e.1823df6364e7a49d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-a-d679334e6e,UID:ci-4081.3.1-a-d679334e6e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-a-d679334e6e,},FirstTimestamp:2025-02-13 20:44:59.082802333 +0000 UTC m=+0.806341657,LastTimestamp:2025-02-13 20:44:59.082802333 +0000 UTC m=+0.806341657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-a-d679334e6e,}" Feb 13 20:44:59.096027 kubelet[2828]: E0213 20:44:59.095992 2828 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-d679334e6e\" not found" Feb 13 20:44:59.096674 kubelet[2828]: I0213 20:44:59.096658 2828 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:44:59.096988 kubelet[2828]: I0213 20:44:59.096955 2828 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:44:59.097112 kubelet[2828]: I0213 20:44:59.097102 2828 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:44:59.097782 kubelet[2828]: W0213 20:44:59.097746 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 13 20:44:59.097912 kubelet[2828]: E0213 20:44:59.097891 2828 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:44:59.098092 kubelet[2828]: E0213 20:44:59.098067 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-d679334e6e?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="200ms" Feb 13 20:44:59.101136 kubelet[2828]: I0213 20:44:59.101115 2828 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:44:59.101201 kubelet[2828]: I0213 20:44:59.101135 2828 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:44:59.102459 kubelet[2828]: I0213 20:44:59.101242 2828 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:44:59.123222 kubelet[2828]: E0213 20:44:59.123198 2828 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:44:59.196553 kubelet[2828]: I0213 20:44:59.196492 2828 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:44:59.197331 kubelet[2828]: E0213 20:44:59.197294 2828 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-d679334e6e\" not found" Feb 13 20:44:59.200133 kubelet[2828]: I0213 20:44:59.200107 2828 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:44:59.200133 kubelet[2828]: I0213 20:44:59.200135 2828 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:44:59.200260 kubelet[2828]: I0213 20:44:59.200164 2828 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 20:44:59.200260 kubelet[2828]: I0213 20:44:59.200174 2828 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:44:59.200260 kubelet[2828]: E0213 20:44:59.200224 2828 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:44:59.202046 kubelet[2828]: W0213 20:44:59.201178 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 13 20:44:59.202046 kubelet[2828]: E0213 20:44:59.201249 2828 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:44:59.290654 kubelet[2828]: I0213 20:44:59.290626 2828 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:44:59.290654 kubelet[2828]: I0213 20:44:59.290644 2828 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:44:59.290824 kubelet[2828]: I0213 20:44:59.290675 2828 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:44:59.295681 kubelet[2828]: I0213 20:44:59.295654 2828 policy_none.go:49] "None policy: Start" Feb 13 20:44:59.295681 kubelet[2828]: I0213 20:44:59.295677 2828 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:44:59.295808 kubelet[2828]: I0213 20:44:59.295691 2828 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:44:59.297823 kubelet[2828]: E0213 20:44:59.297796 2828 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-d679334e6e\" not found" Feb 13 20:44:59.299218 kubelet[2828]: E0213 20:44:59.299187 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-d679334e6e?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="400ms" Feb 13 20:44:59.300360 kubelet[2828]: E0213 20:44:59.300323 2828 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:44:59.304473 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:44:59.319953 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:44:59.323315 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
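Every "connection refused" in this stretch is the kubelet talking to the apiserver it is itself about to host: nothing listens on 10.200.8.4:6443 until the kube-apiserver static pod starts, so the reflector, lease, and event errors are the normal bootstrap chicken-and-egg. Quick ways to watch the port come up (sketch):

    ss -ltn '( sport = :6443 )'                      # empty until the apiserver binds
    curl -sk https://10.200.8.4:6443/healthz; echo   # refused now, "ok" later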
Feb 13 20:44:59.331195 kubelet[2828]: I0213 20:44:59.330672 2828 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:44:59.331195 kubelet[2828]: I0213 20:44:59.330895 2828 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:44:59.331195 kubelet[2828]: I0213 20:44:59.330908 2828 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:44:59.331195 kubelet[2828]: I0213 20:44:59.331190 2828 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:44:59.333067 kubelet[2828]: E0213 20:44:59.332915 2828 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 20:44:59.333067 kubelet[2828]: E0213 20:44:59.333016 2828 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.1-a-d679334e6e\" not found" Feb 13 20:44:59.433418 kubelet[2828]: I0213 20:44:59.433355 2828 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.433798 kubelet[2828]: E0213 20:44:59.433768 2828 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.512628 systemd[1]: Created slice kubepods-burstable-pod49633b76ba0e8478c5dcf57cbb4206b3.slice - libcontainer container kubepods-burstable-pod49633b76ba0e8478c5dcf57cbb4206b3.slice. Feb 13 20:44:59.526064 kubelet[2828]: E0213 20:44:59.525715 2828 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-d679334e6e\" not found" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.526645 systemd[1]: Created slice kubepods-burstable-podac232a3f277b538688d3e11de9e07b91.slice - libcontainer container kubepods-burstable-podac232a3f277b538688d3e11de9e07b91.slice. Feb 13 20:44:59.538091 kubelet[2828]: E0213 20:44:59.538061 2828 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-d679334e6e\" not found" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.540881 systemd[1]: Created slice kubepods-burstable-poda2a3730dfc288f40d5f834b87f1a5ed1.slice - libcontainer container kubepods-burstable-poda2a3730dfc288f40d5f834b87f1a5ed1.slice. 
Feb 13 20:44:59.543148 kubelet[2828]: E0213 20:44:59.543117 2828 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-d679334e6e\" not found" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.599928 kubelet[2828]: I0213 20:44:59.599803 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49633b76ba0e8478c5dcf57cbb4206b3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-d679334e6e\" (UID: \"49633b76ba0e8478c5dcf57cbb4206b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.599928 kubelet[2828]: I0213 20:44:59.599857 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49633b76ba0e8478c5dcf57cbb4206b3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-d679334e6e\" (UID: \"49633b76ba0e8478c5dcf57cbb4206b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.599928 kubelet[2828]: I0213 20:44:59.599914 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac232a3f277b538688d3e11de9e07b91-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-d679334e6e\" (UID: \"ac232a3f277b538688d3e11de9e07b91\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.600230 kubelet[2828]: I0213 20:44:59.599946 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2a3730dfc288f40d5f834b87f1a5ed1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-d679334e6e\" (UID: \"a2a3730dfc288f40d5f834b87f1a5ed1\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.600230 kubelet[2828]: I0213 20:44:59.599995 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49633b76ba0e8478c5dcf57cbb4206b3-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-d679334e6e\" (UID: \"49633b76ba0e8478c5dcf57cbb4206b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.600230 kubelet[2828]: I0213 20:44:59.600020 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49633b76ba0e8478c5dcf57cbb4206b3-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-d679334e6e\" (UID: \"49633b76ba0e8478c5dcf57cbb4206b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.600230 kubelet[2828]: I0213 20:44:59.600045 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49633b76ba0e8478c5dcf57cbb4206b3-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-d679334e6e\" (UID: \"49633b76ba0e8478c5dcf57cbb4206b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.600230 kubelet[2828]: I0213 20:44:59.600074 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2a3730dfc288f40d5f834b87f1a5ed1-ca-certs\") pod 
\"kube-apiserver-ci-4081.3.1-a-d679334e6e\" (UID: \"a2a3730dfc288f40d5f834b87f1a5ed1\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.600396 kubelet[2828]: I0213 20:44:59.600099 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2a3730dfc288f40d5f834b87f1a5ed1-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-d679334e6e\" (UID: \"a2a3730dfc288f40d5f834b87f1a5ed1\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.635942 kubelet[2828]: I0213 20:44:59.635888 2828 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.636342 kubelet[2828]: E0213 20:44:59.636311 2828 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:44:59.700126 kubelet[2828]: E0213 20:44:59.700071 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-d679334e6e?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="800ms" Feb 13 20:44:59.823523 kubelet[2828]: E0213 20:44:59.823323 2828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-a-d679334e6e.1823df6364e7a49d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-a-d679334e6e,UID:ci-4081.3.1-a-d679334e6e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-a-d679334e6e,},FirstTimestamp:2025-02-13 20:44:59.082802333 +0000 UTC m=+0.806341657,LastTimestamp:2025-02-13 20:44:59.082802333 +0000 UTC m=+0.806341657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-a-d679334e6e,}" Feb 13 20:44:59.827416 containerd[1690]: time="2025-02-13T20:44:59.827366241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-d679334e6e,Uid:49633b76ba0e8478c5dcf57cbb4206b3,Namespace:kube-system,Attempt:0,}" Feb 13 20:44:59.838951 containerd[1690]: time="2025-02-13T20:44:59.838915718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-d679334e6e,Uid:ac232a3f277b538688d3e11de9e07b91,Namespace:kube-system,Attempt:0,}" Feb 13 20:44:59.845335 containerd[1690]: time="2025-02-13T20:44:59.845297816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-d679334e6e,Uid:a2a3730dfc288f40d5f834b87f1a5ed1,Namespace:kube-system,Attempt:0,}" Feb 13 20:44:59.983552 kubelet[2828]: W0213 20:44:59.983491 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 13 20:44:59.983705 kubelet[2828]: E0213 20:44:59.983557 2828 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list 
*v1.Service: Get \"https://10.200.8.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:45:00.038977 kubelet[2828]: I0213 20:45:00.038930 2828 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:00.039399 kubelet[2828]: E0213 20:45:00.039359 2828 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:00.113750 kubelet[2828]: W0213 20:45:00.113630 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 13 20:45:00.113750 kubelet[2828]: E0213 20:45:00.113690 2828 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:45:00.384712 kubelet[2828]: W0213 20:45:00.384572 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-d679334e6e&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 13 20:45:00.384712 kubelet[2828]: E0213 20:45:00.384649 2828 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-d679334e6e&limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:45:00.442940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2847888028.mount: Deactivated successfully. 
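[Annotation] The "Failed to ensure lease exists, will retry" entry (controller.go:145) is the kubelet's heartbeat path: it gets-or-creates a coordination.k8s.io Lease named after the node in the kube-node-lease namespace, then renews it. A hedged sketch of that ensure step with client-go; the 40s duration is the kubelet default, and the helper itself is illustrative:

```go
package main

import (
	"context"

	coordinationv1 "k8s.io/api/coordination/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// ensureLease mirrors the get-then-create step being retried in the log above.
func ensureLease(ctx context.Context, c kubernetes.Interface, node string) error {
	leases := c.CoordinationV1().Leases("kube-node-lease")
	if _, err := leases.Get(ctx, node, metav1.GetOptions{}); err == nil {
		return nil // lease already exists; the kubelet only needs to renew it
	} else if !apierrors.IsNotFound(err) {
		return err // e.g. "connection refused" while the apiserver is down
	}
	holder := node
	duration := int32(40) // kubelet default nodeLeaseDurationSeconds
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: node, Namespace: "kube-node-lease"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &duration,
		},
	}
	_, err := leases.Create(ctx, lease, metav1.CreateOptions{})
	return err
}

func main() {
	cfg, err := rest.InClusterConfig() // illustrative; any kubeconfig works
	if err != nil {
		panic(err)
	}
	if err := ensureLease(context.TODO(), kubernetes.NewForConfigOrDie(cfg), "ci-4081.3.1-a-d679334e6e"); err != nil {
		panic(err)
	}
}
```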
Feb 13 20:45:00.471738 containerd[1690]: time="2025-02-13T20:45:00.471584612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:45:00.474904 containerd[1690]: time="2025-02-13T20:45:00.474867363Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:45:00.477833 containerd[1690]: time="2025-02-13T20:45:00.477617805Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 20:45:00.481696 containerd[1690]: time="2025-02-13T20:45:00.481563165Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:45:00.486152 containerd[1690]: time="2025-02-13T20:45:00.486116135Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:45:00.490986 containerd[1690]: time="2025-02-13T20:45:00.490939509Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:45:00.492715 containerd[1690]: time="2025-02-13T20:45:00.492448832Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:45:00.496976 containerd[1690]: time="2025-02-13T20:45:00.496930301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:45:00.497818 containerd[1690]: time="2025-02-13T20:45:00.497784314Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 652.409597ms" Feb 13 20:45:00.499494 containerd[1690]: time="2025-02-13T20:45:00.499458439Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 660.46102ms" Feb 13 20:45:00.500947 kubelet[2828]: E0213 20:45:00.500914 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-d679334e6e?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="1.6s" Feb 13 20:45:00.502451 kubelet[2828]: W0213 20:45:00.502360 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 13 20:45:00.502451 kubelet[2828]: E0213 20:45:00.502406 2828 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:45:00.502583 containerd[1690]: time="2025-02-13T20:45:00.502509486Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 675.043043ms" Feb 13 20:45:00.843264 kubelet[2828]: I0213 20:45:00.843170 2828 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:00.844975 kubelet[2828]: E0213 20:45:00.843551 2828 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:01.095869 kubelet[2828]: E0213 20:45:01.095717 2828 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:45:01.193359 containerd[1690]: time="2025-02-13T20:45:01.192188554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:01.193359 containerd[1690]: time="2025-02-13T20:45:01.192275455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:01.193359 containerd[1690]: time="2025-02-13T20:45:01.192305456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:01.193359 containerd[1690]: time="2025-02-13T20:45:01.192422157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:01.197711 containerd[1690]: time="2025-02-13T20:45:01.197396834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:01.199074 containerd[1690]: time="2025-02-13T20:45:01.198688953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:01.199074 containerd[1690]: time="2025-02-13T20:45:01.198749354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:01.199074 containerd[1690]: time="2025-02-13T20:45:01.198934657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:01.204119 containerd[1690]: time="2025-02-13T20:45:01.203797832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:01.204119 containerd[1690]: time="2025-02-13T20:45:01.203850732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:01.204119 containerd[1690]: time="2025-02-13T20:45:01.203870733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:01.204119 containerd[1690]: time="2025-02-13T20:45:01.203980734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:01.242138 systemd[1]: Started cri-containerd-aa12fe56784a45895b765d094801c49485ca7ac7bd82cd608a07daf6b5d18f4f.scope - libcontainer container aa12fe56784a45895b765d094801c49485ca7ac7bd82cd608a07daf6b5d18f4f. Feb 13 20:45:01.247930 systemd[1]: Started cri-containerd-23172e93eb7d15c7212905b458555aeac91e32aa8f628f35fcda120131a9cdd2.scope - libcontainer container 23172e93eb7d15c7212905b458555aeac91e32aa8f628f35fcda120131a9cdd2. Feb 13 20:45:01.250705 systemd[1]: Started cri-containerd-d28ccf74662d9a6e4ee049204b9cee3bdb8ac489b4f7a9308d51e16170e95e03.scope - libcontainer container d28ccf74662d9a6e4ee049204b9cee3bdb8ac489b4f7a9308d51e16170e95e03. Feb 13 20:45:01.306610 containerd[1690]: time="2025-02-13T20:45:01.306565306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-d679334e6e,Uid:a2a3730dfc288f40d5f834b87f1a5ed1,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa12fe56784a45895b765d094801c49485ca7ac7bd82cd608a07daf6b5d18f4f\"" Feb 13 20:45:01.312669 containerd[1690]: time="2025-02-13T20:45:01.312629299Z" level=info msg="CreateContainer within sandbox \"aa12fe56784a45895b765d094801c49485ca7ac7bd82cd608a07daf6b5d18f4f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:45:01.336037 containerd[1690]: time="2025-02-13T20:45:01.335839155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-d679334e6e,Uid:49633b76ba0e8478c5dcf57cbb4206b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"23172e93eb7d15c7212905b458555aeac91e32aa8f628f35fcda120131a9cdd2\"" Feb 13 20:45:01.339978 containerd[1690]: time="2025-02-13T20:45:01.339581512Z" level=info msg="CreateContainer within sandbox \"23172e93eb7d15c7212905b458555aeac91e32aa8f628f35fcda120131a9cdd2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:45:01.343979 containerd[1690]: time="2025-02-13T20:45:01.343921879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-d679334e6e,Uid:ac232a3f277b538688d3e11de9e07b91,Namespace:kube-system,Attempt:0,} returns sandbox id \"d28ccf74662d9a6e4ee049204b9cee3bdb8ac489b4f7a9308d51e16170e95e03\"" Feb 13 20:45:01.346402 containerd[1690]: time="2025-02-13T20:45:01.346323415Z" level=info msg="CreateContainer within sandbox \"d28ccf74662d9a6e4ee049204b9cee3bdb8ac489b4f7a9308d51e16170e95e03\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:45:01.381475 containerd[1690]: time="2025-02-13T20:45:01.381425953Z" level=info msg="CreateContainer within sandbox \"aa12fe56784a45895b765d094801c49485ca7ac7bd82cd608a07daf6b5d18f4f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"83012b7e3cf8af0f9c2d6fcd2bf73f9a3cb4fef2af5769fac63aa3af5c1de6c5\"" Feb 13 20:45:01.382145 containerd[1690]: 
time="2025-02-13T20:45:01.382113064Z" level=info msg="StartContainer for \"83012b7e3cf8af0f9c2d6fcd2bf73f9a3cb4fef2af5769fac63aa3af5c1de6c5\"" Feb 13 20:45:01.398209 containerd[1690]: time="2025-02-13T20:45:01.398166610Z" level=info msg="CreateContainer within sandbox \"23172e93eb7d15c7212905b458555aeac91e32aa8f628f35fcda120131a9cdd2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bab2c0b04cf751e3b2775440cc84ba4d4eaa58c989e112f7c8d55b5a6bb0daa9\"" Feb 13 20:45:01.399171 containerd[1690]: time="2025-02-13T20:45:01.398947722Z" level=info msg="StartContainer for \"bab2c0b04cf751e3b2775440cc84ba4d4eaa58c989e112f7c8d55b5a6bb0daa9\"" Feb 13 20:45:01.411167 systemd[1]: Started cri-containerd-83012b7e3cf8af0f9c2d6fcd2bf73f9a3cb4fef2af5769fac63aa3af5c1de6c5.scope - libcontainer container 83012b7e3cf8af0f9c2d6fcd2bf73f9a3cb4fef2af5769fac63aa3af5c1de6c5. Feb 13 20:45:01.417019 containerd[1690]: time="2025-02-13T20:45:01.416210386Z" level=info msg="CreateContainer within sandbox \"d28ccf74662d9a6e4ee049204b9cee3bdb8ac489b4f7a9308d51e16170e95e03\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c43c592d41b2e5e15a410f319a6545ac79de40ea357f888e962fb0c7a180fd95\"" Feb 13 20:45:01.418503 containerd[1690]: time="2025-02-13T20:45:01.418466821Z" level=info msg="StartContainer for \"c43c592d41b2e5e15a410f319a6545ac79de40ea357f888e962fb0c7a180fd95\"" Feb 13 20:45:01.468132 systemd[1]: Started cri-containerd-bab2c0b04cf751e3b2775440cc84ba4d4eaa58c989e112f7c8d55b5a6bb0daa9.scope - libcontainer container bab2c0b04cf751e3b2775440cc84ba4d4eaa58c989e112f7c8d55b5a6bb0daa9. Feb 13 20:45:01.478144 systemd[1]: Started cri-containerd-c43c592d41b2e5e15a410f319a6545ac79de40ea357f888e962fb0c7a180fd95.scope - libcontainer container c43c592d41b2e5e15a410f319a6545ac79de40ea357f888e962fb0c7a180fd95. 
Feb 13 20:45:01.510701 containerd[1690]: time="2025-02-13T20:45:01.510657033Z" level=info msg="StartContainer for \"83012b7e3cf8af0f9c2d6fcd2bf73f9a3cb4fef2af5769fac63aa3af5c1de6c5\" returns successfully" Feb 13 20:45:01.545026 containerd[1690]: time="2025-02-13T20:45:01.544936259Z" level=info msg="StartContainer for \"bab2c0b04cf751e3b2775440cc84ba4d4eaa58c989e112f7c8d55b5a6bb0daa9\" returns successfully" Feb 13 20:45:01.624392 containerd[1690]: time="2025-02-13T20:45:01.624252774Z" level=info msg="StartContainer for \"c43c592d41b2e5e15a410f319a6545ac79de40ea357f888e962fb0c7a180fd95\" returns successfully" Feb 13 20:45:02.225731 kubelet[2828]: E0213 20:45:02.225694 2828 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-d679334e6e\" not found" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:02.240881 kubelet[2828]: E0213 20:45:02.240857 2828 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-d679334e6e\" not found" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:02.241235 kubelet[2828]: E0213 20:45:02.241209 2828 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-d679334e6e\" not found" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:02.446228 kubelet[2828]: I0213 20:45:02.446194 2828 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:03.243732 kubelet[2828]: E0213 20:45:03.243699 2828 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-d679334e6e\" not found" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:03.244543 kubelet[2828]: E0213 20:45:03.244521 2828 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-d679334e6e\" not found" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:03.461786 kubelet[2828]: E0213 20:45:03.461736 2828 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.1-a-d679334e6e\" not found" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:03.588253 kubelet[2828]: I0213 20:45:03.588048 2828 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:03.588253 kubelet[2828]: E0213 20:45:03.588098 2828 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4081.3.1-a-d679334e6e\": node \"ci-4081.3.1-a-d679334e6e\" not found" Feb 13 20:45:03.598095 kubelet[2828]: I0213 20:45:03.597765 2828 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:03.607342 kubelet[2828]: E0213 20:45:03.607315 2828 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.1-a-d679334e6e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:03.607342 kubelet[2828]: I0213 20:45:03.607342 2828 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:03.609495 kubelet[2828]: E0213 20:45:03.609470 2828 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.1-a-d679334e6e\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:03.609495 kubelet[2828]: I0213 20:45:03.609492 2828 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:03.611095 kubelet[2828]: E0213 20:45:03.611065 2828 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.1-a-d679334e6e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:04.080364 kubelet[2828]: I0213 20:45:04.080273 2828 apiserver.go:52] "Watching apiserver" Feb 13 20:45:04.098181 kubelet[2828]: I0213 20:45:04.098141 2828 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:45:04.243247 kubelet[2828]: I0213 20:45:04.243206 2828 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:04.245786 kubelet[2828]: E0213 20:45:04.245755 2828 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.1-a-d679334e6e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:04.576063 kubelet[2828]: I0213 20:45:04.576027 2828 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:04.585322 kubelet[2828]: W0213 20:45:04.585027 2828 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:45:05.487321 systemd[1]: Reloading requested from client PID 3105 ('systemctl') (unit session-9.scope)... Feb 13 20:45:05.487336 systemd[1]: Reloading... Feb 13 20:45:05.574005 zram_generator::config[3145]: No configuration found. Feb 13 20:45:05.700853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:45:05.792773 systemd[1]: Reloading finished in 304 ms. Feb 13 20:45:05.835985 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:45:05.860385 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:45:05.860656 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:45:05.867213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:45:05.981567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:45:05.987978 (kubelet)[3212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:45:06.577000 kubelet[3212]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:45:06.577000 kubelet[3212]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 20:45:06.577000 kubelet[3212]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:45:06.577000 kubelet[3212]: I0213 20:45:06.576746 3212 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:45:06.586107 kubelet[3212]: I0213 20:45:06.585011 3212 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:45:06.586107 kubelet[3212]: I0213 20:45:06.585036 3212 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:45:06.586107 kubelet[3212]: I0213 20:45:06.585318 3212 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:45:06.588101 kubelet[3212]: I0213 20:45:06.587450 3212 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:45:06.590171 kubelet[3212]: I0213 20:45:06.589995 3212 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:45:06.593558 kubelet[3212]: E0213 20:45:06.593528 3212 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:45:06.593657 kubelet[3212]: I0213 20:45:06.593560 3212 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:45:06.600721 kubelet[3212]: I0213 20:45:06.600600 3212 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:45:06.601548 kubelet[3212]: I0213 20:45:06.601476 3212 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:45:06.601877 kubelet[3212]: I0213 20:45:06.601516 3212 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.1-a-d679334e6e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:45:06.602170 kubelet[3212]: I0213 20:45:06.602054 3212 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:45:06.602170 kubelet[3212]: I0213 20:45:06.602071 3212 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:45:06.602302 kubelet[3212]: I0213 20:45:06.602293 3212 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:45:06.602465 kubelet[3212]: I0213 20:45:06.602449 3212 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:45:06.602532 kubelet[3212]: I0213 20:45:06.602467 3212 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:45:06.602532 kubelet[3212]: I0213 20:45:06.602526 3212 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:45:06.602607 kubelet[3212]: I0213 20:45:06.602538 3212 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:45:06.606219 kubelet[3212]: I0213 20:45:06.606027 3212 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:45:06.606527 kubelet[3212]: I0213 20:45:06.606507 3212 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:45:06.609003 kubelet[3212]: I0213 20:45:06.608981 3212 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:45:06.609080 kubelet[3212]: I0213 20:45:06.609024 3212 server.go:1287] "Started kubelet" Feb 13 20:45:06.612678 kubelet[3212]: I0213 20:45:06.612280 3212 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:45:06.612941 kubelet[3212]: I0213 20:45:06.612927 3212 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:45:06.613062 kubelet[3212]: I0213 20:45:06.612333 3212 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:45:06.625641 kubelet[3212]: I0213 20:45:06.625612 3212 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:45:06.627495 kubelet[3212]: I0213 20:45:06.626757 3212 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:45:06.631831 kubelet[3212]: I0213 20:45:06.613069 3212 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:45:06.632652 kubelet[3212]: I0213 20:45:06.629088 3212 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:45:06.632652 kubelet[3212]: I0213 20:45:06.629101 3212 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:45:06.632652 kubelet[3212]: E0213 20:45:06.629250 3212 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-d679334e6e\" not found" Feb 13 20:45:06.633558 kubelet[3212]: I0213 20:45:06.633284 3212 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:45:06.639473 kubelet[3212]: I0213 20:45:06.639444 3212 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:45:06.639570 kubelet[3212]: I0213 20:45:06.639548 3212 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:45:06.646238 kubelet[3212]: E0213 20:45:06.646121 3212 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:45:06.647168 kubelet[3212]: I0213 20:45:06.647131 3212 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:45:06.650604 kubelet[3212]: I0213 20:45:06.649600 3212 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:45:06.650604 kubelet[3212]: I0213 20:45:06.649627 3212 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:45:06.650604 kubelet[3212]: I0213 20:45:06.649646 3212 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 20:45:06.650604 kubelet[3212]: I0213 20:45:06.649655 3212 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:45:06.650604 kubelet[3212]: E0213 20:45:06.649700 3212 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:45:06.652762 kubelet[3212]: I0213 20:45:06.652744 3212 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:45:06.707939 kubelet[3212]: I0213 20:45:06.707906 3212 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:45:06.707939 kubelet[3212]: I0213 20:45:06.707928 3212 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:45:06.707939 kubelet[3212]: I0213 20:45:06.707952 3212 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:45:06.708703 kubelet[3212]: I0213 20:45:06.708200 3212 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:45:06.708703 kubelet[3212]: I0213 20:45:06.708214 3212 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:45:06.708703 kubelet[3212]: I0213 20:45:06.708237 3212 policy_none.go:49] "None policy: Start" Feb 13 20:45:06.708703 kubelet[3212]: I0213 20:45:06.708249 3212 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:45:06.708703 kubelet[3212]: I0213 20:45:06.708261 3212 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:45:06.708703 kubelet[3212]: I0213 20:45:06.708394 3212 state_mem.go:75] "Updated machine memory state" Feb 13 20:45:06.713310 kubelet[3212]: I0213 20:45:06.713288 3212 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:45:06.713907 kubelet[3212]: I0213 20:45:06.713678 3212 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:45:06.713907 kubelet[3212]: I0213 20:45:06.713696 3212 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:45:06.714484 kubelet[3212]: I0213 20:45:06.714210 3212 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:45:06.716835 kubelet[3212]: E0213 20:45:06.716463 3212 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 20:45:06.751121 kubelet[3212]: I0213 20:45:06.750473 3212 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:06.751121 kubelet[3212]: I0213 20:45:06.750500 3212 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:06.751786 kubelet[3212]: I0213 20:45:06.751583 3212 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:06.759456 kubelet[3212]: W0213 20:45:06.759424 3212 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:45:06.763590 kubelet[3212]: W0213 20:45:06.763557 3212 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:45:06.763709 kubelet[3212]: E0213 20:45:06.763638 3212 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.1-a-d679334e6e\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:06.763709 kubelet[3212]: W0213 20:45:06.763575 3212 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:45:06.818231 kubelet[3212]: I0213 20:45:06.818164 3212 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:06.829294 kubelet[3212]: I0213 20:45:06.829194 3212 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:06.829731 kubelet[3212]: I0213 20:45:06.829494 3212 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:06.834527 kubelet[3212]: I0213 20:45:06.834352 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2a3730dfc288f40d5f834b87f1a5ed1-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-d679334e6e\" (UID: \"a2a3730dfc288f40d5f834b87f1a5ed1\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:06.834527 kubelet[3212]: I0213 20:45:06.834474 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2a3730dfc288f40d5f834b87f1a5ed1-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-d679334e6e\" (UID: \"a2a3730dfc288f40d5f834b87f1a5ed1\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:06.834836 kubelet[3212]: I0213 20:45:06.834502 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49633b76ba0e8478c5dcf57cbb4206b3-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-d679334e6e\" (UID: \"49633b76ba0e8478c5dcf57cbb4206b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:06.834836 kubelet[3212]: I0213 20:45:06.834741 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac232a3f277b538688d3e11de9e07b91-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-d679334e6e\" 
(UID: \"ac232a3f277b538688d3e11de9e07b91\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:06.834836 kubelet[3212]: I0213 20:45:06.834770 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49633b76ba0e8478c5dcf57cbb4206b3-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-d679334e6e\" (UID: \"49633b76ba0e8478c5dcf57cbb4206b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:06.835190 kubelet[3212]: I0213 20:45:06.835034 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49633b76ba0e8478c5dcf57cbb4206b3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-d679334e6e\" (UID: \"49633b76ba0e8478c5dcf57cbb4206b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:06.835190 kubelet[3212]: I0213 20:45:06.835067 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2a3730dfc288f40d5f834b87f1a5ed1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-d679334e6e\" (UID: \"a2a3730dfc288f40d5f834b87f1a5ed1\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:06.835190 kubelet[3212]: I0213 20:45:06.835113 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49633b76ba0e8478c5dcf57cbb4206b3-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-d679334e6e\" (UID: \"49633b76ba0e8478c5dcf57cbb4206b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:06.835375 kubelet[3212]: I0213 20:45:06.835137 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49633b76ba0e8478c5dcf57cbb4206b3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-d679334e6e\" (UID: \"49633b76ba0e8478c5dcf57cbb4206b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:07.603063 kubelet[3212]: I0213 20:45:07.602920 3212 apiserver.go:52] "Watching apiserver" Feb 13 20:45:07.633981 kubelet[3212]: I0213 20:45:07.633530 3212 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:45:07.688648 kubelet[3212]: I0213 20:45:07.688611 3212 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:07.689474 kubelet[3212]: I0213 20:45:07.689442 3212 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:07.728692 kubelet[3212]: W0213 20:45:07.728651 3212 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:45:07.728840 kubelet[3212]: E0213 20:45:07.728720 3212 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.1-a-d679334e6e\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:07.729118 kubelet[3212]: W0213 20:45:07.728946 3212 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:45:07.729236 kubelet[3212]: E0213 20:45:07.729216 3212 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.1-a-d679334e6e\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.1-a-d679334e6e" Feb 13 20:45:07.802688 kubelet[3212]: I0213 20:45:07.802449 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d679334e6e" podStartSLOduration=3.802426947 podStartE2EDuration="3.802426947s" podCreationTimestamp="2025-02-13 20:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:07.783686588 +0000 UTC m=+1.790838658" watchObservedRunningTime="2025-02-13 20:45:07.802426947 +0000 UTC m=+1.809578917" Feb 13 20:45:07.839639 kubelet[3212]: I0213 20:45:07.839408 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.1-a-d679334e6e" podStartSLOduration=1.8393862539999999 podStartE2EDuration="1.839386254s" podCreationTimestamp="2025-02-13 20:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:07.803705871 +0000 UTC m=+1.810857841" watchObservedRunningTime="2025-02-13 20:45:07.839386254 +0000 UTC m=+1.846538324" Feb 13 20:45:07.857252 kubelet[3212]: I0213 20:45:07.856888 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.1-a-d679334e6e" podStartSLOduration=1.856869388 podStartE2EDuration="1.856869388s" podCreationTimestamp="2025-02-13 20:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:07.840077367 +0000 UTC m=+1.847229337" watchObservedRunningTime="2025-02-13 20:45:07.856869388 +0000 UTC m=+1.864021458" Feb 13 20:45:10.877827 sudo[2223]: pam_unix(sudo:session): session closed for user root Feb 13 20:45:10.979403 sshd[2219]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:10.982813 systemd[1]: sshd@6-10.200.8.4:22-10.200.16.10:48854.service: Deactivated successfully. Feb 13 20:45:10.985057 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:45:10.985286 systemd[1]: session-9.scope: Consumed 4.113s CPU time, 156.8M memory peak, 0B memory swap peak. Feb 13 20:45:10.986698 systemd-logind[1667]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:45:10.988106 systemd-logind[1667]: Removed session 9. Feb 13 20:45:11.828666 kubelet[3212]: I0213 20:45:11.828630 3212 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:45:11.831437 kubelet[3212]: I0213 20:45:11.829651 3212 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:45:11.831547 containerd[1690]: time="2025-02-13T20:45:11.829362139Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:45:12.840017 systemd[1]: Created slice kubepods-besteffort-poddf93e12c_6166_4412_855e_360102c8b636.slice - libcontainer container kubepods-besteffort-poddf93e12c_6166_4412_855e_360102c8b636.slice. 
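[Annotation] "Updating runtime config through cri with podcidr" (kuberuntime_manager.go:1702) is the kubelet handing the node's freshly assigned pod CIDR (192.168.0.0/24) down to containerd, which then waits for a CNI config as logged. A sketch of the underlying CRI call, made directly against the containerd socket:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = rt.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// containerd then logs "No cni config template is specified, wait for other
	// system components to drop the config." until calico writes one.
}
```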
Feb 13 20:45:12.872908 kubelet[3212]: I0213 20:45:12.872733 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/df93e12c-6166-4412-855e-360102c8b636-kube-proxy\") pod \"kube-proxy-spttg\" (UID: \"df93e12c-6166-4412-855e-360102c8b636\") " pod="kube-system/kube-proxy-spttg" Feb 13 20:45:12.872908 kubelet[3212]: I0213 20:45:12.872779 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df93e12c-6166-4412-855e-360102c8b636-xtables-lock\") pod \"kube-proxy-spttg\" (UID: \"df93e12c-6166-4412-855e-360102c8b636\") " pod="kube-system/kube-proxy-spttg" Feb 13 20:45:12.872908 kubelet[3212]: I0213 20:45:12.872803 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mtgf\" (UniqueName: \"kubernetes.io/projected/df93e12c-6166-4412-855e-360102c8b636-kube-api-access-2mtgf\") pod \"kube-proxy-spttg\" (UID: \"df93e12c-6166-4412-855e-360102c8b636\") " pod="kube-system/kube-proxy-spttg" Feb 13 20:45:12.872908 kubelet[3212]: I0213 20:45:12.872828 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df93e12c-6166-4412-855e-360102c8b636-lib-modules\") pod \"kube-proxy-spttg\" (UID: \"df93e12c-6166-4412-855e-360102c8b636\") " pod="kube-system/kube-proxy-spttg" Feb 13 20:45:12.989233 systemd[1]: Created slice kubepods-besteffort-pod1edd1e70_a3e0_4c98_bdaf_820922abe4d3.slice - libcontainer container kubepods-besteffort-pod1edd1e70_a3e0_4c98_bdaf_820922abe4d3.slice. Feb 13 20:45:13.074596 kubelet[3212]: I0213 20:45:13.074398 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1edd1e70-a3e0-4c98-bdaf-820922abe4d3-var-lib-calico\") pod \"tigera-operator-7d68577dc5-kszz2\" (UID: \"1edd1e70-a3e0-4c98-bdaf-820922abe4d3\") " pod="tigera-operator/tigera-operator-7d68577dc5-kszz2" Feb 13 20:45:13.074596 kubelet[3212]: I0213 20:45:13.074523 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psxnm\" (UniqueName: \"kubernetes.io/projected/1edd1e70-a3e0-4c98-bdaf-820922abe4d3-kube-api-access-psxnm\") pod \"tigera-operator-7d68577dc5-kszz2\" (UID: \"1edd1e70-a3e0-4c98-bdaf-820922abe4d3\") " pod="tigera-operator/tigera-operator-7d68577dc5-kszz2" Feb 13 20:45:13.152690 containerd[1690]: time="2025-02-13T20:45:13.152548423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-spttg,Uid:df93e12c-6166-4412-855e-360102c8b636,Namespace:kube-system,Attempt:0,}" Feb 13 20:45:13.199697 containerd[1690]: time="2025-02-13T20:45:13.199616130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:13.199929 containerd[1690]: time="2025-02-13T20:45:13.199730532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:13.199929 containerd[1690]: time="2025-02-13T20:45:13.199766933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:13.199929 containerd[1690]: time="2025-02-13T20:45:13.199860935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:13.227693 systemd[1]: Started cri-containerd-cd0c3122b490527abed758ca7e00074431c05b32bd900eaad211c1957ed86185.scope - libcontainer container cd0c3122b490527abed758ca7e00074431c05b32bd900eaad211c1957ed86185. Feb 13 20:45:13.249090 containerd[1690]: time="2025-02-13T20:45:13.248941280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-spttg,Uid:df93e12c-6166-4412-855e-360102c8b636,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd0c3122b490527abed758ca7e00074431c05b32bd900eaad211c1957ed86185\"" Feb 13 20:45:13.252269 containerd[1690]: time="2025-02-13T20:45:13.252109341Z" level=info msg="CreateContainer within sandbox \"cd0c3122b490527abed758ca7e00074431c05b32bd900eaad211c1957ed86185\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:45:13.284749 containerd[1690]: time="2025-02-13T20:45:13.284705069Z" level=info msg="CreateContainer within sandbox \"cd0c3122b490527abed758ca7e00074431c05b32bd900eaad211c1957ed86185\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"95356f47a66939611c5f4196dcba0833c4ffed0c3038daba84266dcdd54ced46\"" Feb 13 20:45:13.285307 containerd[1690]: time="2025-02-13T20:45:13.285276080Z" level=info msg="StartContainer for \"95356f47a66939611c5f4196dcba0833c4ffed0c3038daba84266dcdd54ced46\"" Feb 13 20:45:13.294309 containerd[1690]: time="2025-02-13T20:45:13.293904846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-kszz2,Uid:1edd1e70-a3e0-4c98-bdaf-820922abe4d3,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:45:13.317140 systemd[1]: Started cri-containerd-95356f47a66939611c5f4196dcba0833c4ffed0c3038daba84266dcdd54ced46.scope - libcontainer container 95356f47a66939611c5f4196dcba0833c4ffed0c3038daba84266dcdd54ced46. Feb 13 20:45:13.345232 containerd[1690]: time="2025-02-13T20:45:13.343703905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:13.345232 containerd[1690]: time="2025-02-13T20:45:13.343759006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:13.345232 containerd[1690]: time="2025-02-13T20:45:13.343773806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:13.345232 containerd[1690]: time="2025-02-13T20:45:13.343860108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:13.359667 containerd[1690]: time="2025-02-13T20:45:13.359379907Z" level=info msg="StartContainer for \"95356f47a66939611c5f4196dcba0833c4ffed0c3038daba84266dcdd54ced46\" returns successfully" Feb 13 20:45:13.371551 systemd[1]: Started cri-containerd-7b3c425945ae27b607c61e1dfaadfc82092f96a346bc3a80e615db95251bcc38.scope - libcontainer container 7b3c425945ae27b607c61e1dfaadfc82092f96a346bc3a80e615db95251bcc38. 
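[Annotation] The tigera-operator sandbox is now up; the "PullImage quay.io/tigera/operator:v1.36.2" request logged just below (resolved to a repo digest roughly 2.4s later) goes through the CRI image service rather than the runtime service. A minimal equivalent call, for illustration:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.2"},
	})
	if err != nil {
		log.Fatal(err)
	}
	// Prints the pulled image reference (sha256:3045aa4a... in the log below).
	fmt.Println(resp.ImageRef)
}
```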
Feb 13 20:45:13.426051 containerd[1690]: time="2025-02-13T20:45:13.425924489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-kszz2,Uid:1edd1e70-a3e0-4c98-bdaf-820922abe4d3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7b3c425945ae27b607c61e1dfaadfc82092f96a346bc3a80e615db95251bcc38\"" Feb 13 20:45:13.428381 containerd[1690]: time="2025-02-13T20:45:13.428155532Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:45:13.724542 kubelet[3212]: I0213 20:45:13.724417 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-spttg" podStartSLOduration=1.724349836 podStartE2EDuration="1.724349836s" podCreationTimestamp="2025-02-13 20:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:13.724101131 +0000 UTC m=+7.731253201" watchObservedRunningTime="2025-02-13 20:45:13.724349836 +0000 UTC m=+7.731501806" Feb 13 20:45:15.010903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2823133431.mount: Deactivated successfully. Feb 13 20:45:15.797276 containerd[1690]: time="2025-02-13T20:45:15.797222260Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:15.799995 containerd[1690]: time="2025-02-13T20:45:15.799919012Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 20:45:15.803859 containerd[1690]: time="2025-02-13T20:45:15.803696385Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:15.809144 containerd[1690]: time="2025-02-13T20:45:15.809093088Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:15.810341 containerd[1690]: time="2025-02-13T20:45:15.809761701Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.381571769s" Feb 13 20:45:15.810341 containerd[1690]: time="2025-02-13T20:45:15.809798002Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 20:45:15.812084 containerd[1690]: time="2025-02-13T20:45:15.812052245Z" level=info msg="CreateContainer within sandbox \"7b3c425945ae27b607c61e1dfaadfc82092f96a346bc3a80e615db95251bcc38\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 20:45:15.840980 containerd[1690]: time="2025-02-13T20:45:15.840934102Z" level=info msg="CreateContainer within sandbox \"7b3c425945ae27b607c61e1dfaadfc82092f96a346bc3a80e615db95251bcc38\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e6b2496bc27a9cef849045b1b6fb2b44409939f92ac5687af9831be30e983110\"" Feb 13 20:45:15.842373 containerd[1690]: time="2025-02-13T20:45:15.841542913Z" level=info msg="StartContainer for \"e6b2496bc27a9cef849045b1b6fb2b44409939f92ac5687af9831be30e983110\"" Feb 
13 20:45:15.884262 systemd[1]: Started cri-containerd-e6b2496bc27a9cef849045b1b6fb2b44409939f92ac5687af9831be30e983110.scope - libcontainer container e6b2496bc27a9cef849045b1b6fb2b44409939f92ac5687af9831be30e983110. Feb 13 20:45:15.912527 containerd[1690]: time="2025-02-13T20:45:15.912481080Z" level=info msg="StartContainer for \"e6b2496bc27a9cef849045b1b6fb2b44409939f92ac5687af9831be30e983110\" returns successfully" Feb 13 20:45:18.930029 kubelet[3212]: I0213 20:45:18.928257 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-kszz2" podStartSLOduration=4.545287068 podStartE2EDuration="6.928236263s" podCreationTimestamp="2025-02-13 20:45:12 +0000 UTC" firstStartedPulling="2025-02-13 20:45:13.427640422 +0000 UTC m=+7.434792492" lastFinishedPulling="2025-02-13 20:45:15.810589717 +0000 UTC m=+9.817741687" observedRunningTime="2025-02-13 20:45:16.741503947 +0000 UTC m=+10.748655917" watchObservedRunningTime="2025-02-13 20:45:18.928236263 +0000 UTC m=+12.935388333" Feb 13 20:45:18.947442 systemd[1]: Created slice kubepods-besteffort-poda0419dd9_e06f_48b0_9ab7_1e9c46108ddd.slice - libcontainer container kubepods-besteffort-poda0419dd9_e06f_48b0_9ab7_1e9c46108ddd.slice. Feb 13 20:45:19.013086 kubelet[3212]: I0213 20:45:19.012888 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2spm\" (UniqueName: \"kubernetes.io/projected/a0419dd9-e06f-48b0-9ab7-1e9c46108ddd-kube-api-access-h2spm\") pod \"calico-typha-58b9d9d59f-bsf5q\" (UID: \"a0419dd9-e06f-48b0-9ab7-1e9c46108ddd\") " pod="calico-system/calico-typha-58b9d9d59f-bsf5q" Feb 13 20:45:19.013086 kubelet[3212]: I0213 20:45:19.012936 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0419dd9-e06f-48b0-9ab7-1e9c46108ddd-tigera-ca-bundle\") pod \"calico-typha-58b9d9d59f-bsf5q\" (UID: \"a0419dd9-e06f-48b0-9ab7-1e9c46108ddd\") " pod="calico-system/calico-typha-58b9d9d59f-bsf5q" Feb 13 20:45:19.013086 kubelet[3212]: I0213 20:45:19.013015 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a0419dd9-e06f-48b0-9ab7-1e9c46108ddd-typha-certs\") pod \"calico-typha-58b9d9d59f-bsf5q\" (UID: \"a0419dd9-e06f-48b0-9ab7-1e9c46108ddd\") " pod="calico-system/calico-typha-58b9d9d59f-bsf5q" Feb 13 20:45:19.057103 kubelet[3212]: W0213 20:45:19.057017 3212 reflector.go:569] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4081.3.1-a-d679334e6e" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.1-a-d679334e6e' and this object Feb 13 20:45:19.057103 kubelet[3212]: E0213 20:45:19.057070 3212 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:ci-4081.3.1-a-d679334e6e\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.1-a-d679334e6e' and this object" logger="UnhandledError" Feb 13 20:45:19.063509 systemd[1]: Created slice kubepods-besteffort-podab1467f8_f620_4904_ae12_626d0b1efb5c.slice - libcontainer container 
kubepods-besteffort-podab1467f8_f620_4904_ae12_626d0b1efb5c.slice. Feb 13 20:45:19.114990 kubelet[3212]: I0213 20:45:19.113166 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ab1467f8-f620-4904-ae12-626d0b1efb5c-cni-net-dir\") pod \"calico-node-tdtrv\" (UID: \"ab1467f8-f620-4904-ae12-626d0b1efb5c\") " pod="calico-system/calico-node-tdtrv" Feb 13 20:45:19.114990 kubelet[3212]: I0213 20:45:19.113211 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab1467f8-f620-4904-ae12-626d0b1efb5c-tigera-ca-bundle\") pod \"calico-node-tdtrv\" (UID: \"ab1467f8-f620-4904-ae12-626d0b1efb5c\") " pod="calico-system/calico-node-tdtrv" Feb 13 20:45:19.114990 kubelet[3212]: I0213 20:45:19.113235 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ab1467f8-f620-4904-ae12-626d0b1efb5c-node-certs\") pod \"calico-node-tdtrv\" (UID: \"ab1467f8-f620-4904-ae12-626d0b1efb5c\") " pod="calico-system/calico-node-tdtrv" Feb 13 20:45:19.114990 kubelet[3212]: I0213 20:45:19.113256 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ab1467f8-f620-4904-ae12-626d0b1efb5c-flexvol-driver-host\") pod \"calico-node-tdtrv\" (UID: \"ab1467f8-f620-4904-ae12-626d0b1efb5c\") " pod="calico-system/calico-node-tdtrv" Feb 13 20:45:19.114990 kubelet[3212]: I0213 20:45:19.113278 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ab1467f8-f620-4904-ae12-626d0b1efb5c-var-lib-calico\") pod \"calico-node-tdtrv\" (UID: \"ab1467f8-f620-4904-ae12-626d0b1efb5c\") " pod="calico-system/calico-node-tdtrv" Feb 13 20:45:19.115305 kubelet[3212]: I0213 20:45:19.113315 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab1467f8-f620-4904-ae12-626d0b1efb5c-xtables-lock\") pod \"calico-node-tdtrv\" (UID: \"ab1467f8-f620-4904-ae12-626d0b1efb5c\") " pod="calico-system/calico-node-tdtrv" Feb 13 20:45:19.115305 kubelet[3212]: I0213 20:45:19.113338 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ab1467f8-f620-4904-ae12-626d0b1efb5c-var-run-calico\") pod \"calico-node-tdtrv\" (UID: \"ab1467f8-f620-4904-ae12-626d0b1efb5c\") " pod="calico-system/calico-node-tdtrv" Feb 13 20:45:19.115305 kubelet[3212]: I0213 20:45:19.113360 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2pth\" (UniqueName: \"kubernetes.io/projected/ab1467f8-f620-4904-ae12-626d0b1efb5c-kube-api-access-g2pth\") pod \"calico-node-tdtrv\" (UID: \"ab1467f8-f620-4904-ae12-626d0b1efb5c\") " pod="calico-system/calico-node-tdtrv" Feb 13 20:45:19.115305 kubelet[3212]: I0213 20:45:19.113384 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ab1467f8-f620-4904-ae12-626d0b1efb5c-policysync\") pod \"calico-node-tdtrv\" (UID: \"ab1467f8-f620-4904-ae12-626d0b1efb5c\") " pod="calico-system/calico-node-tdtrv" Feb 
13 20:45:19.115305 kubelet[3212]: I0213 20:45:19.113405 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ab1467f8-f620-4904-ae12-626d0b1efb5c-cni-log-dir\") pod \"calico-node-tdtrv\" (UID: \"ab1467f8-f620-4904-ae12-626d0b1efb5c\") " pod="calico-system/calico-node-tdtrv" Feb 13 20:45:19.115511 kubelet[3212]: I0213 20:45:19.113429 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab1467f8-f620-4904-ae12-626d0b1efb5c-lib-modules\") pod \"calico-node-tdtrv\" (UID: \"ab1467f8-f620-4904-ae12-626d0b1efb5c\") " pod="calico-system/calico-node-tdtrv" Feb 13 20:45:19.115511 kubelet[3212]: I0213 20:45:19.113463 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ab1467f8-f620-4904-ae12-626d0b1efb5c-cni-bin-dir\") pod \"calico-node-tdtrv\" (UID: \"ab1467f8-f620-4904-ae12-626d0b1efb5c\") " pod="calico-system/calico-node-tdtrv" Feb 13 20:45:19.197420 kubelet[3212]: E0213 20:45:19.197015 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jn87" podUID="bdc0f384-e713-4490-b32f-30642a7169b0" Feb 13 20:45:19.217526 kubelet[3212]: I0213 20:45:19.217499 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bdc0f384-e713-4490-b32f-30642a7169b0-socket-dir\") pod \"csi-node-driver-4jn87\" (UID: \"bdc0f384-e713-4490-b32f-30642a7169b0\") " pod="calico-system/csi-node-driver-4jn87" Feb 13 20:45:19.217774 kubelet[3212]: I0213 20:45:19.217756 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bdc0f384-e713-4490-b32f-30642a7169b0-kubelet-dir\") pod \"csi-node-driver-4jn87\" (UID: \"bdc0f384-e713-4490-b32f-30642a7169b0\") " pod="calico-system/csi-node-driver-4jn87" Feb 13 20:45:19.217978 kubelet[3212]: I0213 20:45:19.217946 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bdc0f384-e713-4490-b32f-30642a7169b0-varrun\") pod \"csi-node-driver-4jn87\" (UID: \"bdc0f384-e713-4490-b32f-30642a7169b0\") " pod="calico-system/csi-node-driver-4jn87" Feb 13 20:45:19.219484 kubelet[3212]: I0213 20:45:19.219371 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8szw\" (UniqueName: \"kubernetes.io/projected/bdc0f384-e713-4490-b32f-30642a7169b0-kube-api-access-p8szw\") pod \"csi-node-driver-4jn87\" (UID: \"bdc0f384-e713-4490-b32f-30642a7169b0\") " pod="calico-system/csi-node-driver-4jn87" Feb 13 20:45:19.219665 kubelet[3212]: I0213 20:45:19.219642 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bdc0f384-e713-4490-b32f-30642a7169b0-registration-dir\") pod \"csi-node-driver-4jn87\" (UID: \"bdc0f384-e713-4490-b32f-30642a7169b0\") " pod="calico-system/csi-node-driver-4jn87" Feb 13 20:45:19.227794 kubelet[3212]: E0213 20:45:19.227708 3212 
driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.227794 kubelet[3212]: W0213 20:45:19.227730 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.227794 kubelet[3212]: E0213 20:45:19.227760 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.243246 kubelet[3212]: E0213 20:45:19.243220 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.243246 kubelet[3212]: W0213 20:45:19.243242 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.244995 kubelet[3212]: E0213 20:45:19.243262 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.257628 containerd[1690]: time="2025-02-13T20:45:19.257591307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58b9d9d59f-bsf5q,Uid:a0419dd9-e06f-48b0-9ab7-1e9c46108ddd,Namespace:calico-system,Attempt:0,}" Feb 13 20:45:19.318163 containerd[1690]: time="2025-02-13T20:45:19.318056071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:19.318674 containerd[1690]: time="2025-02-13T20:45:19.318447079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:19.318940 containerd[1690]: time="2025-02-13T20:45:19.318803686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:19.319362 containerd[1690]: time="2025-02-13T20:45:19.319216894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:19.323477 kubelet[3212]: E0213 20:45:19.323123 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.323477 kubelet[3212]: W0213 20:45:19.323145 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.323477 kubelet[3212]: E0213 20:45:19.323172 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:45:19.329247 kubelet[3212]: E0213 20:45:19.329053 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.329247 kubelet[3212]: W0213 20:45:19.329073 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.329247 kubelet[3212]: E0213 20:45:19.329110 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.329472 kubelet[3212]: E0213 20:45:19.329396 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.329472 kubelet[3212]: W0213 20:45:19.329411 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.329472 kubelet[3212]: E0213 20:45:19.329428 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.329978 kubelet[3212]: E0213 20:45:19.329623 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.329978 kubelet[3212]: W0213 20:45:19.329637 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.329978 kubelet[3212]: E0213 20:45:19.329667 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.332020 kubelet[3212]: E0213 20:45:19.331995 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.332110 kubelet[3212]: W0213 20:45:19.332023 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.332110 kubelet[3212]: E0213 20:45:19.332053 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.332656 kubelet[3212]: E0213 20:45:19.332552 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.332656 kubelet[3212]: W0213 20:45:19.332572 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.334554 kubelet[3212]: E0213 20:45:19.334389 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:45:19.335183 kubelet[3212]: E0213 20:45:19.335083 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.335183 kubelet[3212]: W0213 20:45:19.335099 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.335705 kubelet[3212]: E0213 20:45:19.335422 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.336406 kubelet[3212]: E0213 20:45:19.336289 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.336406 kubelet[3212]: W0213 20:45:19.336310 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.336406 kubelet[3212]: E0213 20:45:19.336329 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.337824 kubelet[3212]: E0213 20:45:19.337648 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.337824 kubelet[3212]: W0213 20:45:19.337670 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.337824 kubelet[3212]: E0213 20:45:19.337779 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.339262 kubelet[3212]: E0213 20:45:19.338979 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.339262 kubelet[3212]: W0213 20:45:19.338997 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.339262 kubelet[3212]: E0213 20:45:19.339094 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.339478 kubelet[3212]: E0213 20:45:19.339285 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.339478 kubelet[3212]: W0213 20:45:19.339296 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.340401 kubelet[3212]: E0213 20:45:19.340381 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:45:19.340715 kubelet[3212]: E0213 20:45:19.340634 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.340715 kubelet[3212]: W0213 20:45:19.340646 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.341534 kubelet[3212]: E0213 20:45:19.341028 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.342063 kubelet[3212]: E0213 20:45:19.342044 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.342143 kubelet[3212]: W0213 20:45:19.342064 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.342979 kubelet[3212]: E0213 20:45:19.342706 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.344869 kubelet[3212]: E0213 20:45:19.344843 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.344869 kubelet[3212]: W0213 20:45:19.344863 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.345049 kubelet[3212]: E0213 20:45:19.344950 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.345194 kubelet[3212]: E0213 20:45:19.345126 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.345194 kubelet[3212]: W0213 20:45:19.345145 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.348373 kubelet[3212]: E0213 20:45:19.348146 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.348373 kubelet[3212]: E0213 20:45:19.348342 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.348373 kubelet[3212]: W0213 20:45:19.348356 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.348517 kubelet[3212]: E0213 20:45:19.348418 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:45:19.350378 kubelet[3212]: E0213 20:45:19.350121 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.350378 kubelet[3212]: W0213 20:45:19.350138 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.350378 kubelet[3212]: E0213 20:45:19.350226 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.353537 kubelet[3212]: E0213 20:45:19.353513 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.353537 kubelet[3212]: W0213 20:45:19.353534 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.353777 kubelet[3212]: E0213 20:45:19.353759 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.353839 kubelet[3212]: W0213 20:45:19.353777 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.354882 kubelet[3212]: E0213 20:45:19.354037 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.354882 kubelet[3212]: W0213 20:45:19.354052 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.354882 kubelet[3212]: E0213 20:45:19.354067 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.354882 kubelet[3212]: E0213 20:45:19.354309 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.354882 kubelet[3212]: W0213 20:45:19.354323 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.354882 kubelet[3212]: E0213 20:45:19.354336 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:45:19.354882 kubelet[3212]: E0213 20:45:19.354581 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.354882 kubelet[3212]: W0213 20:45:19.354591 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.354882 kubelet[3212]: E0213 20:45:19.354605 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.354882 kubelet[3212]: E0213 20:45:19.354631 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.355333 kubelet[3212]: E0213 20:45:19.354820 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.355333 kubelet[3212]: W0213 20:45:19.354832 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.355333 kubelet[3212]: E0213 20:45:19.354848 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.355333 kubelet[3212]: E0213 20:45:19.355329 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.355514 kubelet[3212]: W0213 20:45:19.355344 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.355514 kubelet[3212]: E0213 20:45:19.355358 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.355514 kubelet[3212]: E0213 20:45:19.355382 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.355973 kubelet[3212]: E0213 20:45:19.355936 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.356053 kubelet[3212]: W0213 20:45:19.355989 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.356053 kubelet[3212]: E0213 20:45:19.356007 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.356923 systemd[1]: Started cri-containerd-afcf0a392108c4efb8f4253543a0bbfe059dd3f12d92098fa4a7c8b4890a1916.scope - libcontainer container afcf0a392108c4efb8f4253543a0bbfe059dd3f12d92098fa4a7c8b4890a1916. 
Feb 13 20:45:19.376532 kubelet[3212]: E0213 20:45:19.376453 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:19.376532 kubelet[3212]: W0213 20:45:19.376473 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:19.376532 kubelet[3212]: E0213 20:45:19.376494 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:19.417988 containerd[1690]: time="2025-02-13T20:45:19.417930093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58b9d9d59f-bsf5q,Uid:a0419dd9-e06f-48b0-9ab7-1e9c46108ddd,Namespace:calico-system,Attempt:0,} returns sandbox id \"afcf0a392108c4efb8f4253543a0bbfe059dd3f12d92098fa4a7c8b4890a1916\"" Feb 13 20:45:19.420266 containerd[1690]: time="2025-02-13T20:45:19.420155336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:45:20.016989 kubelet[3212]: E0213 20:45:20.016947 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:20.016989 kubelet[3212]: W0213 20:45:20.016977 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:20.016989 kubelet[3212]: E0213 20:45:20.017000 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:20.268441 containerd[1690]: time="2025-02-13T20:45:20.268324435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tdtrv,Uid:ab1467f8-f620-4904-ae12-626d0b1efb5c,Namespace:calico-system,Attempt:0,}" Feb 13 20:45:20.309844 containerd[1690]: time="2025-02-13T20:45:20.309555127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:20.309844 containerd[1690]: time="2025-02-13T20:45:20.309621229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:20.309844 containerd[1690]: time="2025-02-13T20:45:20.309653229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:20.309844 containerd[1690]: time="2025-02-13T20:45:20.309755331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:20.339115 systemd[1]: Started cri-containerd-4afc222a27f6a461649cddeeca15d87703f0917b96b53d2614d7339ae2d7d3d2.scope - libcontainer container 4afc222a27f6a461649cddeeca15d87703f0917b96b53d2614d7339ae2d7d3d2. 
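Two lifecycle details in the sandbox start-up above: the four "loading plugin io.containerd.*" lines come from the containerd runc v2 shim launched for each pod sandbox, and systemd then tracks each container in a transient cri-containerd-<container-id>.scope unit (visible via, e.g., systemctl list-units 'cri-containerd-*.scope'). The FlexVolume errors also keep recurring here; judging from the calico-node container creation further below, it is the flexvol-driver init container from ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1 that eventually installs the uds binary into the nodeagent~uds plugin directory (through the flexvol-driver-host host-path volume attached earlier), which is presumably the point at which the prober stops complaining.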
Feb 13 20:45:20.360084 containerd[1690]: time="2025-02-13T20:45:20.360029897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tdtrv,Uid:ab1467f8-f620-4904-ae12-626d0b1efb5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4afc222a27f6a461649cddeeca15d87703f0917b96b53d2614d7339ae2d7d3d2\"" Feb 13 20:45:20.652135 kubelet[3212]: E0213 20:45:20.650929 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jn87" podUID="bdc0f384-e713-4490-b32f-30642a7169b0" Feb 13 20:45:21.123486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2350521048.mount: Deactivated successfully. Feb 13 20:45:22.075157 containerd[1690]: time="2025-02-13T20:45:22.075106655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:22.078148 containerd[1690]: time="2025-02-13T20:45:22.078082812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 20:45:22.081149 containerd[1690]: time="2025-02-13T20:45:22.081098270Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:22.086116 containerd[1690]: time="2025-02-13T20:45:22.086087266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:22.087253 containerd[1690]: time="2025-02-13T20:45:22.086704178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.665700126s" Feb 13 20:45:22.087253 containerd[1690]: time="2025-02-13T20:45:22.086742678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 20:45:22.088505 containerd[1690]: time="2025-02-13T20:45:22.088350509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 20:45:22.112358 containerd[1690]: time="2025-02-13T20:45:22.112204968Z" level=info msg="CreateContainer within sandbox \"afcf0a392108c4efb8f4253543a0bbfe059dd3f12d92098fa4a7c8b4890a1916\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 20:45:22.152862 containerd[1690]: time="2025-02-13T20:45:22.152825048Z" level=info msg="CreateContainer within sandbox \"afcf0a392108c4efb8f4253543a0bbfe059dd3f12d92098fa4a7c8b4890a1916\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e88dbfb60f9ecc9e64e9a175d916516d08bfd57e22606d98a5e545b3146315b1\"" Feb 13 20:45:22.153483 containerd[1690]: time="2025-02-13T20:45:22.153364859Z" level=info msg="StartContainer for \"e88dbfb60f9ecc9e64e9a175d916516d08bfd57e22606d98a5e545b3146315b1\"" Feb 13 20:45:22.187131 systemd[1]: Started cri-containerd-e88dbfb60f9ecc9e64e9a175d916516d08bfd57e22606d98a5e545b3146315b1.scope - 
libcontainer container e88dbfb60f9ecc9e64e9a175d916516d08bfd57e22606d98a5e545b3146315b1. Feb 13 20:45:22.235002 containerd[1690]: time="2025-02-13T20:45:22.234922226Z" level=info msg="StartContainer for \"e88dbfb60f9ecc9e64e9a175d916516d08bfd57e22606d98a5e545b3146315b1\" returns successfully" Feb 13 20:45:22.651767 kubelet[3212]: E0213 20:45:22.650549 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jn87" podUID="bdc0f384-e713-4490-b32f-30642a7169b0" Feb 13 20:45:22.736178 kubelet[3212]: E0213 20:45:22.734953 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.736178 kubelet[3212]: W0213 20:45:22.734992 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.736178 kubelet[3212]: E0213 20:45:22.735012 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.736178 kubelet[3212]: E0213 20:45:22.735510 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.736178 kubelet[3212]: W0213 20:45:22.735523 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.736178 kubelet[3212]: E0213 20:45:22.735557 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.736178 kubelet[3212]: E0213 20:45:22.735838 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.736178 kubelet[3212]: W0213 20:45:22.735851 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.736178 kubelet[3212]: E0213 20:45:22.735886 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.736693 kubelet[3212]: E0213 20:45:22.736225 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.736693 kubelet[3212]: W0213 20:45:22.736235 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.736693 kubelet[3212]: E0213 20:45:22.736249 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:45:22.736843 kubelet[3212]: E0213 20:45:22.736702 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.736843 kubelet[3212]: W0213 20:45:22.736715 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.736843 kubelet[3212]: E0213 20:45:22.736730 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.739013 kubelet[3212]: E0213 20:45:22.736948 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.739013 kubelet[3212]: W0213 20:45:22.736993 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.739013 kubelet[3212]: E0213 20:45:22.737009 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.739013 kubelet[3212]: E0213 20:45:22.737266 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.739013 kubelet[3212]: W0213 20:45:22.737277 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.739013 kubelet[3212]: E0213 20:45:22.737301 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.739013 kubelet[3212]: E0213 20:45:22.737515 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.739013 kubelet[3212]: W0213 20:45:22.737537 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.739013 kubelet[3212]: E0213 20:45:22.737550 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.739013 kubelet[3212]: E0213 20:45:22.737782 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.739464 kubelet[3212]: W0213 20:45:22.737792 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.739464 kubelet[3212]: E0213 20:45:22.737803 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:45:22.739464 kubelet[3212]: E0213 20:45:22.738086 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.739464 kubelet[3212]: W0213 20:45:22.738097 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.739464 kubelet[3212]: E0213 20:45:22.738122 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.739464 kubelet[3212]: E0213 20:45:22.738327 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.739464 kubelet[3212]: W0213 20:45:22.738345 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.739464 kubelet[3212]: E0213 20:45:22.738360 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.739464 kubelet[3212]: E0213 20:45:22.738556 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.739464 kubelet[3212]: W0213 20:45:22.738566 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.739867 kubelet[3212]: E0213 20:45:22.738578 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.739867 kubelet[3212]: E0213 20:45:22.738795 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.739867 kubelet[3212]: W0213 20:45:22.738805 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.739867 kubelet[3212]: E0213 20:45:22.738816 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.739867 kubelet[3212]: E0213 20:45:22.739056 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.739867 kubelet[3212]: W0213 20:45:22.739067 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.739867 kubelet[3212]: E0213 20:45:22.739080 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:45:22.739867 kubelet[3212]: E0213 20:45:22.739313 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.739867 kubelet[3212]: W0213 20:45:22.739323 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.739867 kubelet[3212]: E0213 20:45:22.739334 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.747399 kubelet[3212]: I0213 20:45:22.747348 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58b9d9d59f-bsf5q" podStartSLOduration=2.078975497 podStartE2EDuration="4.747333073s" podCreationTimestamp="2025-02-13 20:45:18 +0000 UTC" firstStartedPulling="2025-02-13 20:45:19.419215518 +0000 UTC m=+13.426367488" lastFinishedPulling="2025-02-13 20:45:22.087573094 +0000 UTC m=+16.094725064" observedRunningTime="2025-02-13 20:45:22.746823563 +0000 UTC m=+16.753975633" watchObservedRunningTime="2025-02-13 20:45:22.747333073 +0000 UTC m=+16.754485143" Feb 13 20:45:22.764774 kubelet[3212]: E0213 20:45:22.764748 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.764774 kubelet[3212]: W0213 20:45:22.764769 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.764915 kubelet[3212]: E0213 20:45:22.764785 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.765981 kubelet[3212]: E0213 20:45:22.765456 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.765981 kubelet[3212]: W0213 20:45:22.765476 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.765981 kubelet[3212]: E0213 20:45:22.765673 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.765981 kubelet[3212]: E0213 20:45:22.765968 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.765981 kubelet[3212]: W0213 20:45:22.765983 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.766271 kubelet[3212]: E0213 20:45:22.766002 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:45:22.766973 kubelet[3212]: E0213 20:45:22.766604 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.766973 kubelet[3212]: W0213 20:45:22.766619 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.766973 kubelet[3212]: E0213 20:45:22.766650 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.768297 kubelet[3212]: E0213 20:45:22.767181 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.768297 kubelet[3212]: W0213 20:45:22.767197 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.768297 kubelet[3212]: E0213 20:45:22.767381 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.768297 kubelet[3212]: E0213 20:45:22.768135 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.768297 kubelet[3212]: W0213 20:45:22.768149 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.768297 kubelet[3212]: E0213 20:45:22.768167 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.768914 kubelet[3212]: E0213 20:45:22.768894 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.769004 kubelet[3212]: W0213 20:45:22.768932 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.769060 kubelet[3212]: E0213 20:45:22.769010 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.769692 kubelet[3212]: E0213 20:45:22.769669 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.769692 kubelet[3212]: W0213 20:45:22.769689 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.769938 kubelet[3212]: E0213 20:45:22.769914 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:45:22.770549 kubelet[3212]: E0213 20:45:22.770528 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.770549 kubelet[3212]: W0213 20:45:22.770547 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.770826 kubelet[3212]: E0213 20:45:22.770708 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.771074 kubelet[3212]: E0213 20:45:22.771055 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.771074 kubelet[3212]: W0213 20:45:22.771069 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.771231 kubelet[3212]: E0213 20:45:22.771158 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.771370 kubelet[3212]: E0213 20:45:22.771354 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.771370 kubelet[3212]: W0213 20:45:22.771367 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.771562 kubelet[3212]: E0213 20:45:22.771456 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.772155 kubelet[3212]: E0213 20:45:22.772130 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.772155 kubelet[3212]: W0213 20:45:22.772146 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.772293 kubelet[3212]: E0213 20:45:22.772166 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.772421 kubelet[3212]: E0213 20:45:22.772405 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.772481 kubelet[3212]: W0213 20:45:22.772436 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.772607 kubelet[3212]: E0213 20:45:22.772533 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:45:22.772760 kubelet[3212]: E0213 20:45:22.772686 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.772760 kubelet[3212]: W0213 20:45:22.772697 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.773010 kubelet[3212]: E0213 20:45:22.772859 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.773249 kubelet[3212]: E0213 20:45:22.773067 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.773249 kubelet[3212]: W0213 20:45:22.773078 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.773249 kubelet[3212]: E0213 20:45:22.773093 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.773987 kubelet[3212]: E0213 20:45:22.773912 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.773987 kubelet[3212]: W0213 20:45:22.773930 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.773987 kubelet[3212]: E0213 20:45:22.773946 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.775033 kubelet[3212]: E0213 20:45:22.775016 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.776784 kubelet[3212]: W0213 20:45:22.775393 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.776784 kubelet[3212]: E0213 20:45:22.775459 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:22.777021 kubelet[3212]: E0213 20:45:22.777006 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:22.777107 kubelet[3212]: W0213 20:45:22.777095 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:22.777188 kubelet[3212]: E0213 20:45:22.777171 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Feb 13 20:45:23.707513 containerd[1690]: time="2025-02-13T20:45:23.707410522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:23.710534 containerd[1690]: time="2025-02-13T20:45:23.710322978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Feb 13 20:45:23.714319 containerd[1690]: time="2025-02-13T20:45:23.714124151Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:23.719455 containerd[1690]: time="2025-02-13T20:45:23.719281850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:23.720675 containerd[1690]: time="2025-02-13T20:45:23.720300670Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.631912959s"
Feb 13 20:45:23.720675 containerd[1690]: time="2025-02-13T20:45:23.720342070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Feb 13 20:45:23.722520 containerd[1690]: time="2025-02-13T20:45:23.722490812Z" level=info msg="CreateContainer within sandbox \"4afc222a27f6a461649cddeeca15d87703f0917b96b53d2614d7339ae2d7d3d2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 20:45:23.744215 kubelet[3212]: E0213 20:45:23.744187 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:23.744215 kubelet[3212]: W0213 20:45:23.744205 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:23.744637 kubelet[3212]: E0213 20:45:23.744228 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:23.762861 containerd[1690]: time="2025-02-13T20:45:23.762828787Z" level=info msg="CreateContainer within sandbox \"4afc222a27f6a461649cddeeca15d87703f0917b96b53d2614d7339ae2d7d3d2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5675a79eeef055d7adcce797669cd8d0ee1f611c6973dcea9865403e076412b1\""
Feb 13 20:45:23.763359 containerd[1690]: time="2025-02-13T20:45:23.763334297Z" level=info msg="StartContainer for \"5675a79eeef055d7adcce797669cd8d0ee1f611c6973dcea9865403e076412b1\""
Feb 13 20:45:23.773935 kubelet[3212]: E0213 20:45:23.773757 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:23.773935 kubelet[3212]: W0213 20:45:23.773780 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:23.773935 kubelet[3212]: E0213 20:45:23.773797 3212 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:23.805158 systemd[1]: Started cri-containerd-5675a79eeef055d7adcce797669cd8d0ee1f611c6973dcea9865403e076412b1.scope - libcontainer container 5675a79eeef055d7adcce797669cd8d0ee1f611c6973dcea9865403e076412b1.
Feb 13 20:45:23.834472 containerd[1690]: time="2025-02-13T20:45:23.834427363Z" level=info msg="StartContainer for \"5675a79eeef055d7adcce797669cd8d0ee1f611c6973dcea9865403e076412b1\" returns successfully"
Feb 13 20:45:23.846579 systemd[1]: cri-containerd-5675a79eeef055d7adcce797669cd8d0ee1f611c6973dcea9865403e076412b1.scope: Deactivated successfully.
Feb 13 20:45:23.873216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5675a79eeef055d7adcce797669cd8d0ee1f611c6973dcea9865403e076412b1-rootfs.mount: Deactivated successfully.
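The CreateContainer/StartContainer pairs and the matching systemd cri-containerd-*.scope units above are the CRI flow the kubelet drives through containerd. For orientation, a rough sketch of the same pull, create, start sequence against the containerd 1.x Go client (the container and snapshot IDs are made up; this is not the kubelet's actual code path, which goes through the CRI gRPC API):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// "k8s.io" is the containerd namespace CRI-managed containers live in.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Corresponds to the PullImage / ImageCreate events in the log.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer / StartContainer map onto NewContainer, NewTask, Start.
	container, err := client.NewContainer(ctx, "flexvol-driver-demo",
		containerd.WithNewSnapshot("flexvol-driver-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	// On success, systemd shows a cri-containerd-<id>.scope unit as above.
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```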
Feb 13 20:45:24.651028 kubelet[3212]: E0213 20:45:24.650025 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jn87" podUID="bdc0f384-e713-4490-b32f-30642a7169b0"
Feb 13 20:45:25.157364 containerd[1690]: time="2025-02-13T20:45:25.157282183Z" level=info msg="shim disconnected" id=5675a79eeef055d7adcce797669cd8d0ee1f611c6973dcea9865403e076412b1 namespace=k8s.io
Feb 13 20:45:25.157364 containerd[1690]: time="2025-02-13T20:45:25.157352285Z" level=warning msg="cleaning up after shim disconnected" id=5675a79eeef055d7adcce797669cd8d0ee1f611c6973dcea9865403e076412b1 namespace=k8s.io
Feb 13 20:45:25.157364 containerd[1690]: time="2025-02-13T20:45:25.157363885Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:45:25.741120 containerd[1690]: time="2025-02-13T20:45:25.741044701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 20:45:26.651403 kubelet[3212]: E0213 20:45:26.650415 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jn87" podUID="bdc0f384-e713-4490-b32f-30642a7169b0"
Feb 13 20:45:28.650251 kubelet[3212]: E0213 20:45:28.650209 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jn87" podUID="bdc0f384-e713-4490-b32f-30642a7169b0"
Feb 13 20:45:30.651920 kubelet[3212]: E0213 20:45:30.651770 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jn87" podUID="bdc0f384-e713-4490-b32f-30642a7169b0"
Feb 13 20:45:31.173694 containerd[1690]: time="2025-02-13T20:45:31.173646271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:31.175673 containerd[1690]: time="2025-02-13T20:45:31.175616309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Feb 13 20:45:31.179511 containerd[1690]: time="2025-02-13T20:45:31.179457983Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:31.183635 containerd[1690]: time="2025-02-13T20:45:31.183381057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:31.184364 containerd[1690]: time="2025-02-13T20:45:31.184331376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.443238473s"
Feb 13 20:45:31.184585 containerd[1690]: time="2025-02-13T20:45:31.184479978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Feb 13 20:45:31.186892 containerd[1690]: time="2025-02-13T20:45:31.186747422Z" level=info msg="CreateContainer within sandbox \"4afc222a27f6a461649cddeeca15d87703f0917b96b53d2614d7339ae2d7d3d2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 20:45:31.226518 containerd[1690]: time="2025-02-13T20:45:31.226377379Z" level=info msg="CreateContainer within sandbox \"4afc222a27f6a461649cddeeca15d87703f0917b96b53d2614d7339ae2d7d3d2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"01409222c9ffb351d2869c2d92111ae4f791f5ccd566626f3c549697d42af841\""
Feb 13 20:45:31.228702 containerd[1690]: time="2025-02-13T20:45:31.228533120Z" level=info msg="StartContainer for \"01409222c9ffb351d2869c2d92111ae4f791f5ccd566626f3c549697d42af841\""
Feb 13 20:45:31.264108 systemd[1]: Started cri-containerd-01409222c9ffb351d2869c2d92111ae4f791f5ccd566626f3c549697d42af841.scope - libcontainer container 01409222c9ffb351d2869c2d92111ae4f791f5ccd566626f3c549697d42af841.
Feb 13 20:45:31.295026 containerd[1690]: time="2025-02-13T20:45:31.294151874Z" level=info msg="StartContainer for \"01409222c9ffb351d2869c2d92111ae4f791f5ccd566626f3c549697d42af841\" returns successfully"
Feb 13 20:45:32.650766 kubelet[3212]: E0213 20:45:32.650359 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jn87" podUID="bdc0f384-e713-4490-b32f-30642a7169b0"
Feb 13 20:45:32.675508 systemd[1]: cri-containerd-01409222c9ffb351d2869c2d92111ae4f791f5ccd566626f3c549697d42af841.scope: Deactivated successfully.
Feb 13 20:45:32.676820 kubelet[3212]: I0213 20:45:32.676192 3212 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Feb 13 20:45:32.705808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01409222c9ffb351d2869c2d92111ae4f791f5ccd566626f3c549697d42af841-rootfs.mount: Deactivated successfully.
Feb 13 20:45:32.734472 kubelet[3212]: I0213 20:45:32.733551 3212 status_manager.go:890] "Failed to get status for pod" podUID="63e3fb52-bf2b-489c-b2b0-089fed67b060" pod="calico-apiserver/calico-apiserver-5cd94c5f6c-cq9bs" err="pods \"calico-apiserver-5cd94c5f6c-cq9bs\" is forbidden: User \"system:node:ci-4081.3.1-a-d679334e6e\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.1-a-d679334e6e' and this object"
Feb 13 20:45:32.742283 systemd[1]: Created slice kubepods-besteffort-pod63e3fb52_bf2b_489c_b2b0_089fed67b060.slice - libcontainer container kubepods-besteffort-pod63e3fb52_bf2b_489c_b2b0_089fed67b060.slice.
Feb 13 20:45:32.750219 systemd[1]: Created slice kubepods-burstable-pod960cb38c_4a08_4b6d_84ad_a76ffe60ddf8.slice - libcontainer container kubepods-burstable-pod960cb38c_4a08_4b6d_84ad_a76ffe60ddf8.slice.
Feb 13 20:45:32.764131 systemd[1]: Created slice kubepods-besteffort-pod9b078f36_321f_46d6_b74f_37d7f4d0e5a4.slice - libcontainer container kubepods-besteffort-pod9b078f36_321f_46d6_b74f_37d7f4d0e5a4.slice.
Feb 13 20:45:32.769017 systemd[1]: Created slice kubepods-burstable-podbd165ac4_5280_463e_90d2_d1e413c8b382.slice - libcontainer container kubepods-burstable-podbd165ac4_5280_463e_90d2_d1e413c8b382.slice.
Feb 13 20:45:32.777805 systemd[1]: Created slice kubepods-besteffort-pod99b825d8_e1ce_486c_8a9a_fdc5d65f5ebf.slice - libcontainer container kubepods-besteffort-pod99b825d8_e1ce_486c_8a9a_fdc5d65f5ebf.slice.
Feb 13 20:45:33.258830 kubelet[3212]: W0213 20:45:32.734846 3212 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.1-a-d679334e6e" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.1-a-d679334e6e' and this object
Feb 13 20:45:33.258830 kubelet[3212]: E0213 20:45:32.734880 3212 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081.3.1-a-d679334e6e\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.1-a-d679334e6e' and this object" logger="UnhandledError"
Feb 13 20:45:33.258830 kubelet[3212]: I0213 20:45:32.840440 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd165ac4-5280-463e-90d2-d1e413c8b382-config-volume\") pod \"coredns-668d6bf9bc-4th5s\" (UID: \"bd165ac4-5280-463e-90d2-d1e413c8b382\") " pod="kube-system/coredns-668d6bf9bc-4th5s"
Feb 13 20:45:33.258830 kubelet[3212]: I0213 20:45:32.840492 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/63e3fb52-bf2b-489c-b2b0-089fed67b060-calico-apiserver-certs\") pod \"calico-apiserver-5cd94c5f6c-cq9bs\" (UID: \"63e3fb52-bf2b-489c-b2b0-089fed67b060\") " pod="calico-apiserver/calico-apiserver-5cd94c5f6c-cq9bs"
Feb 13 20:45:33.258830 kubelet[3212]: I0213 20:45:32.840524 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mzt2\" (UniqueName: \"kubernetes.io/projected/9b078f36-321f-46d6-b74f-37d7f4d0e5a4-kube-api-access-4mzt2\") pod \"calico-apiserver-5cd94c5f6c-ffdzp\" (UID: \"9b078f36-321f-46d6-b74f-37d7f4d0e5a4\") " pod="calico-apiserver/calico-apiserver-5cd94c5f6c-ffdzp"
Feb 13 20:45:33.259395 kubelet[3212]: I0213 20:45:32.840583 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9cjx\" (UniqueName: \"kubernetes.io/projected/bd165ac4-5280-463e-90d2-d1e413c8b382-kube-api-access-s9cjx\") pod \"coredns-668d6bf9bc-4th5s\" (UID: \"bd165ac4-5280-463e-90d2-d1e413c8b382\") " pod="kube-system/coredns-668d6bf9bc-4th5s"
Feb 13 20:45:33.259395 kubelet[3212]: I0213 20:45:32.840608 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf-tigera-ca-bundle\") pod \"calico-kube-controllers-59fdbdb8b6-m5nsm\" (UID: \"99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf\") " pod="calico-system/calico-kube-controllers-59fdbdb8b6-m5nsm"
Feb 13 20:45:33.259395 kubelet[3212]: I0213 20:45:32.840659 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv4x6\" (UniqueName: \"kubernetes.io/projected/99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf-kube-api-access-kv4x6\") pod \"calico-kube-controllers-59fdbdb8b6-m5nsm\" (UID: \"99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf\") " pod="calico-system/calico-kube-controllers-59fdbdb8b6-m5nsm"
Feb 13 20:45:33.259395 kubelet[3212]: I0213 20:45:32.840695 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9b078f36-321f-46d6-b74f-37d7f4d0e5a4-calico-apiserver-certs\") pod \"calico-apiserver-5cd94c5f6c-ffdzp\" (UID: \"9b078f36-321f-46d6-b74f-37d7f4d0e5a4\") " pod="calico-apiserver/calico-apiserver-5cd94c5f6c-ffdzp"
Feb 13 20:45:33.259395 kubelet[3212]: I0213 20:45:32.842404 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/960cb38c-4a08-4b6d-84ad-a76ffe60ddf8-config-volume\") pod \"coredns-668d6bf9bc-qb2g2\" (UID: \"960cb38c-4a08-4b6d-84ad-a76ffe60ddf8\") " pod="kube-system/coredns-668d6bf9bc-qb2g2"
Feb 13 20:45:33.259769 kubelet[3212]: I0213 20:45:32.842462 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgmcz\" (UniqueName: \"kubernetes.io/projected/63e3fb52-bf2b-489c-b2b0-089fed67b060-kube-api-access-pgmcz\") pod \"calico-apiserver-5cd94c5f6c-cq9bs\" (UID: \"63e3fb52-bf2b-489c-b2b0-089fed67b060\") " pod="calico-apiserver/calico-apiserver-5cd94c5f6c-cq9bs"
Feb 13 20:45:33.259769 kubelet[3212]: I0213 20:45:32.842499 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7ztj\" (UniqueName: \"kubernetes.io/projected/960cb38c-4a08-4b6d-84ad-a76ffe60ddf8-kube-api-access-h7ztj\") pod \"coredns-668d6bf9bc-qb2g2\" (UID: \"960cb38c-4a08-4b6d-84ad-a76ffe60ddf8\") " pod="kube-system/coredns-668d6bf9bc-qb2g2"
Feb 13 20:45:33.569104 containerd[1690]: time="2025-02-13T20:45:33.569035533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4th5s,Uid:bd165ac4-5280-463e-90d2-d1e413c8b382,Namespace:kube-system,Attempt:0,}"
Feb 13 20:45:33.572216 containerd[1690]: time="2025-02-13T20:45:33.572184593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59fdbdb8b6-m5nsm,Uid:99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf,Namespace:calico-system,Attempt:0,}"
Feb 13 20:45:33.575016 containerd[1690]: time="2025-02-13T20:45:33.574988247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qb2g2,Uid:960cb38c-4a08-4b6d-84ad-a76ffe60ddf8,Namespace:kube-system,Attempt:0,}"
Feb 13 20:45:33.858834 containerd[1690]: time="2025-02-13T20:45:33.858678866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd94c5f6c-cq9bs,Uid:63e3fb52-bf2b-489c-b2b0-089fed67b060,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 20:45:33.872812 containerd[1690]: time="2025-02-13T20:45:33.872773435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd94c5f6c-ffdzp,Uid:9b078f36-321f-46d6-b74f-37d7f4d0e5a4,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 20:45:34.360878 containerd[1690]: time="2025-02-13T20:45:34.360792758Z" level=info msg="shim disconnected" id=01409222c9ffb351d2869c2d92111ae4f791f5ccd566626f3c549697d42af841 namespace=k8s.io
Feb 13 20:45:34.361066 containerd[1690]: time="2025-02-13T20:45:34.361006163Z" level=warning msg="cleaning up after shim disconnected" id=01409222c9ffb351d2869c2d92111ae4f791f5ccd566626f3c549697d42af841 namespace=k8s.io
Feb 13 20:45:34.361066 containerd[1690]: time="2025-02-13T20:45:34.361045463Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:45:34.606901 containerd[1690]: time="2025-02-13T20:45:34.606851259Z" level=error msg="Failed to destroy network for sandbox \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:34.608587 containerd[1690]: time="2025-02-13T20:45:34.608131384Z" level=error msg="encountered an error cleaning up failed sandbox \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:34.608587 containerd[1690]: time="2025-02-13T20:45:34.608196885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd94c5f6c-cq9bs,Uid:63e3fb52-bf2b-489c-b2b0-089fed67b060,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
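Every sandbox failure in this stretch has the same proximate cause: Calico's CNI plugin stats /var/lib/calico/nodename before it will configure a pod network, and that file only exists once the calico/node container (still blocked above) has started and bind-mounted /var/lib/calico/. A minimal sketch of that gating check; the helper name is ours, not taken from the Calico source:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// nodenameReady mimics the readiness gate implied by the log: Calico's CNI
// plugin reads /var/lib/calico/nodename, which calico/node writes at startup.
func nodenameReady(path string) error {
	if _, err := os.Stat(path); err != nil {
		if errors.Is(err, os.ErrNotExist) {
			return fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", path)
		}
		return err
	}
	return nil
}

func main() {
	// On a node where calico/node has not written the file yet, every
	// CNI add (and hence every RunPodSandbox) fails with this error.
	if err := nodenameReady("/var/lib/calico/nodename"); err != nil {
		fmt.Println("CNI add would fail:", err)
	}
}
```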
Feb 13 20:45:34.608987 kubelet[3212]: E0213 20:45:34.608573 3212 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:34.608987 kubelet[3212]: E0213 20:45:34.608653 3212 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cd94c5f6c-cq9bs"
Feb 13 20:45:34.608987 kubelet[3212]: E0213 20:45:34.608711 3212 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cd94c5f6c-cq9bs"
Feb 13 20:45:34.609864 kubelet[3212]: E0213 20:45:34.608776 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cd94c5f6c-cq9bs_calico-apiserver(63e3fb52-bf2b-489c-b2b0-089fed67b060)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cd94c5f6c-cq9bs_calico-apiserver(63e3fb52-bf2b-489c-b2b0-089fed67b060)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cd94c5f6c-cq9bs" podUID="63e3fb52-bf2b-489c-b2b0-089fed67b060"
Feb 13 20:45:34.664452 systemd[1]: Created slice kubepods-besteffort-podbdc0f384_e713_4490_b32f_30642a7169b0.slice - libcontainer container kubepods-besteffort-podbdc0f384_e713_4490_b32f_30642a7169b0.slice.
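The triple-escaped err strings in the pod_workers entries are a quoting artifact, not corruption: each layer (CNI result, CRI response, kubelet pod worker) quotes the error text below it, so every level of wrapping doubles the backslashes when the log line is finally printed with %q-style quoting. A small sketch of the mechanism (sandbox ID and pod name shortened; the exact wrapping layers in the kubelet differ):

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	// Innermost failure, roughly as the CNI plugin reports it.
	inner := errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)

	// The runtime quotes the sandbox ID; the kubelet then quotes the whole
	// runtime error string. Each %q layer escapes the quotes below it.
	sandbox := fmt.Errorf("failed to setup network for sandbox %q: %w", "be80994a", inner)
	sync := fmt.Errorf("failed to %q with CreatePodSandboxError: %q", "CreatePodSandbox", sandbox.Error())

	// Printing the field quoted once more yields the \\\" runs seen above.
	fmt.Printf("err=%q\n", sync.Error())
}
```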
Feb 13 20:45:34.668350 containerd[1690]: time="2025-02-13T20:45:34.667862125Z" level=error msg="Failed to destroy network for sandbox \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:34.668350 containerd[1690]: time="2025-02-13T20:45:34.668241532Z" level=error msg="encountered an error cleaning up failed sandbox \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:34.668518 containerd[1690]: time="2025-02-13T20:45:34.668463936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4jn87,Uid:bdc0f384-e713-4490-b32f-30642a7169b0,Namespace:calico-system,Attempt:0,}"
Feb 13 20:45:34.668998 containerd[1690]: time="2025-02-13T20:45:34.668862744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qb2g2,Uid:960cb38c-4a08-4b6d-84ad-a76ffe60ddf8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:34.669195 kubelet[3212]: E0213 20:45:34.669085 3212 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:34.669195 kubelet[3212]: E0213 20:45:34.669138 3212 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qb2g2"
Feb 13 20:45:34.669195 kubelet[3212]: E0213 20:45:34.669162 3212 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qb2g2"
Feb 13 20:45:34.669618 kubelet[3212]: E0213 20:45:34.669206 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qb2g2_kube-system(960cb38c-4a08-4b6d-84ad-a76ffe60ddf8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qb2g2_kube-system(960cb38c-4a08-4b6d-84ad-a76ffe60ddf8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qb2g2" podUID="960cb38c-4a08-4b6d-84ad-a76ffe60ddf8"
\\\"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qb2g2" podUID="960cb38c-4a08-4b6d-84ad-a76ffe60ddf8" Feb 13 20:45:34.677510 containerd[1690]: time="2025-02-13T20:45:34.677007199Z" level=error msg="Failed to destroy network for sandbox \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.677510 containerd[1690]: time="2025-02-13T20:45:34.677304705Z" level=error msg="encountered an error cleaning up failed sandbox \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.677510 containerd[1690]: time="2025-02-13T20:45:34.677364406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd94c5f6c-ffdzp,Uid:9b078f36-321f-46d6-b74f-37d7f4d0e5a4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.677510 containerd[1690]: time="2025-02-13T20:45:34.677321905Z" level=error msg="Failed to destroy network for sandbox \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.677773 containerd[1690]: time="2025-02-13T20:45:34.677741113Z" level=error msg="encountered an error cleaning up failed sandbox \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.677837 containerd[1690]: time="2025-02-13T20:45:34.677811515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4th5s,Uid:bd165ac4-5280-463e-90d2-d1e413c8b382,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.678699 kubelet[3212]: E0213 20:45:34.677980 3212 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 20:45:34.678699 kubelet[3212]: E0213 20:45:34.678030 3212 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4th5s" Feb 13 20:45:34.678699 kubelet[3212]: E0213 20:45:34.678055 3212 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4th5s" Feb 13 20:45:34.678889 kubelet[3212]: E0213 20:45:34.678093 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4th5s_kube-system(bd165ac4-5280-463e-90d2-d1e413c8b382)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4th5s_kube-system(bd165ac4-5280-463e-90d2-d1e413c8b382)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4th5s" podUID="bd165ac4-5280-463e-90d2-d1e413c8b382" Feb 13 20:45:34.678889 kubelet[3212]: E0213 20:45:34.678387 3212 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.678889 kubelet[3212]: E0213 20:45:34.678425 3212 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cd94c5f6c-ffdzp" Feb 13 20:45:34.679535 kubelet[3212]: E0213 20:45:34.678450 3212 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cd94c5f6c-ffdzp" Feb 13 20:45:34.679535 kubelet[3212]: E0213 20:45:34.678489 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cd94c5f6c-ffdzp_calico-apiserver(9b078f36-321f-46d6-b74f-37d7f4d0e5a4)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-5cd94c5f6c-ffdzp_calico-apiserver(9b078f36-321f-46d6-b74f-37d7f4d0e5a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cd94c5f6c-ffdzp" podUID="9b078f36-321f-46d6-b74f-37d7f4d0e5a4" Feb 13 20:45:34.680863 containerd[1690]: time="2025-02-13T20:45:34.680830372Z" level=error msg="Failed to destroy network for sandbox \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.681171 containerd[1690]: time="2025-02-13T20:45:34.681134078Z" level=error msg="encountered an error cleaning up failed sandbox \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.681260 containerd[1690]: time="2025-02-13T20:45:34.681183379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59fdbdb8b6-m5nsm,Uid:99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.681385 kubelet[3212]: E0213 20:45:34.681339 3212 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.681466 kubelet[3212]: E0213 20:45:34.681378 3212 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59fdbdb8b6-m5nsm" Feb 13 20:45:34.681466 kubelet[3212]: E0213 20:45:34.681400 3212 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59fdbdb8b6-m5nsm" Feb 13 20:45:34.681466 kubelet[3212]: E0213 20:45:34.681437 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-59fdbdb8b6-m5nsm_calico-system(99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59fdbdb8b6-m5nsm_calico-system(99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59fdbdb8b6-m5nsm" podUID="99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf" Feb 13 20:45:34.756992 containerd[1690]: time="2025-02-13T20:45:34.756932226Z" level=error msg="Failed to destroy network for sandbox \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.757356 containerd[1690]: time="2025-02-13T20:45:34.757243632Z" level=error msg="encountered an error cleaning up failed sandbox \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.757356 containerd[1690]: time="2025-02-13T20:45:34.757319134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4jn87,Uid:bdc0f384-e713-4490-b32f-30642a7169b0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.757646 kubelet[3212]: E0213 20:45:34.757588 3212 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.757736 kubelet[3212]: E0213 20:45:34.757670 3212 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4jn87" Feb 13 20:45:34.757736 kubelet[3212]: E0213 20:45:34.757695 3212 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4jn87" Feb 13 20:45:34.757825 kubelet[3212]: E0213 20:45:34.757743 
3212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4jn87_calico-system(bdc0f384-e713-4490-b32f-30642a7169b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4jn87_calico-system(bdc0f384-e713-4490-b32f-30642a7169b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4jn87" podUID="bdc0f384-e713-4490-b32f-30642a7169b0" Feb 13 20:45:34.761472 kubelet[3212]: I0213 20:45:34.760757 3212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Feb 13 20:45:34.761654 containerd[1690]: time="2025-02-13T20:45:34.761628216Z" level=info msg="StopPodSandbox for \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\"" Feb 13 20:45:34.761859 kubelet[3212]: I0213 20:45:34.761835 3212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Feb 13 20:45:34.762181 containerd[1690]: time="2025-02-13T20:45:34.762140426Z" level=info msg="Ensure that sandbox 8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5 in task-service has been cleanup successfully" Feb 13 20:45:34.762575 containerd[1690]: time="2025-02-13T20:45:34.762328829Z" level=info msg="StopPodSandbox for \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\"" Feb 13 20:45:34.762575 containerd[1690]: time="2025-02-13T20:45:34.762490832Z" level=info msg="Ensure that sandbox 8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013 in task-service has been cleanup successfully" Feb 13 20:45:34.782379 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497-shm.mount: Deactivated successfully. Feb 13 20:45:34.783106 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24-shm.mount: Deactivated successfully. Feb 13 20:45:34.783366 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b-shm.mount: Deactivated successfully. 
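Every sandbox operation above fails with the same root cause: the Calico CNI plugin resolves the node name from /var/lib/calico/nodename, a file the calico/node container writes after it starts, and at this point that container's image is still being pulled (see the PullImage "ghcr.io/flatcar/calico/node:v3.29.1" entry just below). A minimal Go sketch of that check, inferred from the error text rather than taken from Calico's actual source:

// Illustrative sketch (not Calico's real code) of the check that produces
// the "stat /var/lib/calico/nodename" errors above: the CNI plugin reads
// the node name from a file that calico/node writes on startup.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// nodename returns the node name recorded by calico/node. When the file
// has not been written yet, os.Stat's error ("stat /var/lib/calico/nodename:
// no such file or directory") is wrapped with the hint seen in the log.
func nodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // what kubelet surfaces as CreatePodSandboxError
		os.Exit(1)
	}
	fmt.Println("node:", name)
}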
Feb 13 20:45:34.786007 kubelet[3212]: I0213 20:45:34.785302 3212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Feb 13 20:45:34.788996 containerd[1690]: time="2025-02-13T20:45:34.787374908Z" level=info msg="StopPodSandbox for \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\"" Feb 13 20:45:34.788996 containerd[1690]: time="2025-02-13T20:45:34.787946219Z" level=info msg="Ensure that sandbox be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b in task-service has been cleanup successfully" Feb 13 20:45:34.792484 kubelet[3212]: I0213 20:45:34.792461 3212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Feb 13 20:45:34.793840 containerd[1690]: time="2025-02-13T20:45:34.793809431Z" level=info msg="StopPodSandbox for \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\"" Feb 13 20:45:34.794858 containerd[1690]: time="2025-02-13T20:45:34.794012035Z" level=info msg="Ensure that sandbox db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205 in task-service has been cleanup successfully" Feb 13 20:45:34.797937 kubelet[3212]: I0213 20:45:34.797918 3212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Feb 13 20:45:34.798481 containerd[1690]: time="2025-02-13T20:45:34.798442119Z" level=info msg="StopPodSandbox for \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\"" Feb 13 20:45:34.798654 containerd[1690]: time="2025-02-13T20:45:34.798623023Z" level=info msg="Ensure that sandbox 2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24 in task-service has been cleanup successfully" Feb 13 20:45:34.811695 containerd[1690]: time="2025-02-13T20:45:34.811661672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:45:34.815869 kubelet[3212]: I0213 20:45:34.815835 3212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Feb 13 20:45:34.817215 containerd[1690]: time="2025-02-13T20:45:34.817037175Z" level=info msg="StopPodSandbox for \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\"" Feb 13 20:45:34.817669 containerd[1690]: time="2025-02-13T20:45:34.817434482Z" level=info msg="Ensure that sandbox 7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497 in task-service has been cleanup successfully" Feb 13 20:45:34.884477 containerd[1690]: time="2025-02-13T20:45:34.884414162Z" level=error msg="StopPodSandbox for \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\" failed" error="failed to destroy network for sandbox \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.886261 kubelet[3212]: E0213 20:45:34.886002 3212 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" podSandboxID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Feb 13 20:45:34.886261 kubelet[3212]: E0213 20:45:34.886093 3212 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5"} Feb 13 20:45:34.886261 kubelet[3212]: E0213 20:45:34.886186 3212 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bdc0f384-e713-4490-b32f-30642a7169b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:45:34.886261 kubelet[3212]: E0213 20:45:34.886219 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bdc0f384-e713-4490-b32f-30642a7169b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4jn87" podUID="bdc0f384-e713-4490-b32f-30642a7169b0" Feb 13 20:45:34.909285 containerd[1690]: time="2025-02-13T20:45:34.909150634Z" level=error msg="StopPodSandbox for \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\" failed" error="failed to destroy network for sandbox \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.909491 kubelet[3212]: E0213 20:45:34.909412 3212 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Feb 13 20:45:34.909491 kubelet[3212]: E0213 20:45:34.909470 3212 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205"} Feb 13 20:45:34.909737 kubelet[3212]: E0213 20:45:34.909513 3212 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"960cb38c-4a08-4b6d-84ad-a76ffe60ddf8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:45:34.909737 kubelet[3212]: E0213 20:45:34.909543 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"960cb38c-4a08-4b6d-84ad-a76ffe60ddf8\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qb2g2" podUID="960cb38c-4a08-4b6d-84ad-a76ffe60ddf8" Feb 13 20:45:34.926881 containerd[1690]: time="2025-02-13T20:45:34.926674369Z" level=error msg="StopPodSandbox for \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\" failed" error="failed to destroy network for sandbox \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.927354 kubelet[3212]: E0213 20:45:34.926915 3212 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Feb 13 20:45:34.927354 kubelet[3212]: E0213 20:45:34.926987 3212 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013"} Feb 13 20:45:34.927354 kubelet[3212]: E0213 20:45:34.927028 3212 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b078f36-321f-46d6-b74f-37d7f4d0e5a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:45:34.927354 kubelet[3212]: E0213 20:45:34.927059 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b078f36-321f-46d6-b74f-37d7f4d0e5a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cd94c5f6c-ffdzp" podUID="9b078f36-321f-46d6-b74f-37d7f4d0e5a4" Feb 13 20:45:34.930121 containerd[1690]: time="2025-02-13T20:45:34.930081634Z" level=error msg="StopPodSandbox for \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\" failed" error="failed to destroy network for sandbox \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.930860 kubelet[3212]: E0213 20:45:34.930824 3212 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Feb 13 20:45:34.931045 kubelet[3212]: E0213 20:45:34.930870 3212 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b"} Feb 13 20:45:34.931045 kubelet[3212]: E0213 20:45:34.930904 3212 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63e3fb52-bf2b-489c-b2b0-089fed67b060\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:45:34.931045 kubelet[3212]: E0213 20:45:34.930931 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63e3fb52-bf2b-489c-b2b0-089fed67b060\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cd94c5f6c-cq9bs" podUID="63e3fb52-bf2b-489c-b2b0-089fed67b060" Feb 13 20:45:34.934395 containerd[1690]: time="2025-02-13T20:45:34.934347816Z" level=error msg="StopPodSandbox for \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\" failed" error="failed to destroy network for sandbox \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.934813 kubelet[3212]: E0213 20:45:34.934644 3212 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Feb 13 20:45:34.934813 kubelet[3212]: E0213 20:45:34.934795 3212 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24"} Feb 13 20:45:34.935229 kubelet[3212]: E0213 20:45:34.934831 3212 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd165ac4-5280-463e-90d2-d1e413c8b382\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:45:34.935229 kubelet[3212]: E0213 20:45:34.934856 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd165ac4-5280-463e-90d2-d1e413c8b382\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4th5s" podUID="bd165ac4-5280-463e-90d2-d1e413c8b382" Feb 13 20:45:34.939066 containerd[1690]: time="2025-02-13T20:45:34.939032405Z" level=error msg="StopPodSandbox for \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\" failed" error="failed to destroy network for sandbox \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:45:34.939216 kubelet[3212]: E0213 20:45:34.939175 3212 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Feb 13 20:45:34.939216 kubelet[3212]: E0213 20:45:34.939210 3212 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497"} Feb 13 20:45:34.939334 kubelet[3212]: E0213 20:45:34.939244 3212 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:45:34.939334 kubelet[3212]: E0213 20:45:34.939268 3212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59fdbdb8b6-m5nsm" podUID="99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf" Feb 13 20:45:43.159822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2663942983.mount: Deactivated successfully. 
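The delete path fails identically: CNI DEL also needs the node name, so kubelet's KillPodSandbox retries keep erroring until calico/node comes up. What unblocks everything is the step logged next, where the node image finishes pulling and the container starts, recording the node name under the host-mounted /var/lib/calico/. A sketch of that startup write, assuming (from the error text) that a plain file containing the node name is all the CNI plugin needs; the hostname is the one this log reports later:

// Minimal sketch of calico/node's startup write, under the assumptions
// stated above; the real logic lives in calico/node, not here.
package main

import "os"

func writeNodename(name string) error {
	// /var/lib/calico is a hostPath mount inside the calico/node container,
	// so the CNI plugin on the host sees the same file.
	if err := os.MkdirAll("/var/lib/calico", 0o755); err != nil {
		return err
	}
	return os.WriteFile("/var/lib/calico/nodename", []byte(name), 0o644)
}

func main() {
	if err := writeNodename("ci-4081.3.1-a-d679334e6e"); err != nil {
		panic(err)
	}
}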
Feb 13 20:45:43.206242 containerd[1690]: time="2025-02-13T20:45:43.206182194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:43.208476 containerd[1690]: time="2025-02-13T20:45:43.208414835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:45:43.212425 containerd[1690]: time="2025-02-13T20:45:43.212376608Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:43.215817 containerd[1690]: time="2025-02-13T20:45:43.215768071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:43.216768 containerd[1690]: time="2025-02-13T20:45:43.216322481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.404614609s" Feb 13 20:45:43.216768 containerd[1690]: time="2025-02-13T20:45:43.216362082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:45:43.231207 containerd[1690]: time="2025-02-13T20:45:43.231043553Z" level=info msg="CreateContainer within sandbox \"4afc222a27f6a461649cddeeca15d87703f0917b96b53d2614d7339ae2d7d3d2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:45:43.268296 containerd[1690]: time="2025-02-13T20:45:43.268244539Z" level=info msg="CreateContainer within sandbox \"4afc222a27f6a461649cddeeca15d87703f0917b96b53d2614d7339ae2d7d3d2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3dd6555d139c7eb1d73a017e1bb9605de582df3773430604108e804c35cca2d9\"" Feb 13 20:45:43.268994 containerd[1690]: time="2025-02-13T20:45:43.268812350Z" level=info msg="StartContainer for \"3dd6555d139c7eb1d73a017e1bb9605de582df3773430604108e804c35cca2d9\"" Feb 13 20:45:43.298143 systemd[1]: Started cri-containerd-3dd6555d139c7eb1d73a017e1bb9605de582df3773430604108e804c35cca2d9.scope - libcontainer container 3dd6555d139c7eb1d73a017e1bb9605de582df3773430604108e804c35cca2d9. Feb 13 20:45:43.326363 containerd[1690]: time="2025-02-13T20:45:43.326314411Z" level=info msg="StartContainer for \"3dd6555d139c7eb1d73a017e1bb9605de582df3773430604108e804c35cca2d9\" returns successfully" Feb 13 20:45:43.700332 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:45:43.700544 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
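For reference, the PullImage / CreateContainer / StartContainer sequence above maps onto containerd's Go client roughly as below. This is an illustrative sketch, not kubelet's code path (kubelet drives containerd over CRI, which is why the log shows the k8s.io namespace and sandbox-scoped containers); the container ID and snapshot name are made up:

// Rough equivalent of the pull/create/start sequence logged above,
// using containerd's public Go client.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed resources live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack, as in the "PullImage ... returns image reference" entry.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create, then start: containerd hands back the container ID first and
	// only launches the task on start, matching the two CreateContainer /
	// StartContainer log entries.
	container, err := client.NewContainer(ctx, "calico-node-demo",
		containerd.WithNewSnapshot("calico-node-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}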
Feb 13 20:45:43.868722 kubelet[3212]: I0213 20:45:43.867599 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tdtrv" podStartSLOduration=2.011481047 podStartE2EDuration="24.867578127s" podCreationTimestamp="2025-02-13 20:45:19 +0000 UTC" firstStartedPulling="2025-02-13 20:45:20.361068817 +0000 UTC m=+14.368220787" lastFinishedPulling="2025-02-13 20:45:43.217165897 +0000 UTC m=+37.224317867" observedRunningTime="2025-02-13 20:45:43.867486725 +0000 UTC m=+37.874638695" watchObservedRunningTime="2025-02-13 20:45:43.867578127 +0000 UTC m=+37.874730097" Feb 13 20:45:45.352075 kernel: bpftool[4522]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:45:45.633616 systemd-networkd[1448]: vxlan.calico: Link UP Feb 13 20:45:45.633625 systemd-networkd[1448]: vxlan.calico: Gained carrier Feb 13 20:45:46.652629 containerd[1690]: time="2025-02-13T20:45:46.652579270Z" level=info msg="StopPodSandbox for \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\"" Feb 13 20:45:46.738824 containerd[1690]: 2025-02-13 20:45:46.700 [INFO][4610] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Feb 13 20:45:46.738824 containerd[1690]: 2025-02-13 20:45:46.700 [INFO][4610] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" iface="eth0" netns="/var/run/netns/cni-055e5b84-77c5-ef87-7a5d-698de445fa48" Feb 13 20:45:46.738824 containerd[1690]: 2025-02-13 20:45:46.700 [INFO][4610] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" iface="eth0" netns="/var/run/netns/cni-055e5b84-77c5-ef87-7a5d-698de445fa48" Feb 13 20:45:46.738824 containerd[1690]: 2025-02-13 20:45:46.700 [INFO][4610] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" iface="eth0" netns="/var/run/netns/cni-055e5b84-77c5-ef87-7a5d-698de445fa48" Feb 13 20:45:46.738824 containerd[1690]: 2025-02-13 20:45:46.700 [INFO][4610] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Feb 13 20:45:46.738824 containerd[1690]: 2025-02-13 20:45:46.700 [INFO][4610] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Feb 13 20:45:46.738824 containerd[1690]: 2025-02-13 20:45:46.725 [INFO][4617] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" HandleID="k8s-pod-network.8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Workload="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:45:46.738824 containerd[1690]: 2025-02-13 20:45:46.726 [INFO][4617] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:45:46.738824 containerd[1690]: 2025-02-13 20:45:46.726 [INFO][4617] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:45:46.738824 containerd[1690]: 2025-02-13 20:45:46.733 [WARNING][4617] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" HandleID="k8s-pod-network.8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Workload="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:45:46.738824 containerd[1690]: 2025-02-13 20:45:46.733 [INFO][4617] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" HandleID="k8s-pod-network.8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Workload="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:45:46.738824 containerd[1690]: 2025-02-13 20:45:46.735 [INFO][4617] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:45:46.738824 containerd[1690]: 2025-02-13 20:45:46.737 [INFO][4610] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Feb 13 20:45:46.741109 containerd[1690]: time="2025-02-13T20:45:46.741054088Z" level=info msg="TearDown network for sandbox \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\" successfully" Feb 13 20:45:46.741109 containerd[1690]: time="2025-02-13T20:45:46.741095589Z" level=info msg="StopPodSandbox for \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\" returns successfully" Feb 13 20:45:46.742155 containerd[1690]: time="2025-02-13T20:45:46.741772701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4jn87,Uid:bdc0f384-e713-4490-b32f-30642a7169b0,Namespace:calico-system,Attempt:1,}" Feb 13 20:45:46.743647 systemd[1]: run-netns-cni\x2d055e5b84\x2d77c5\x2def87\x2d7a5d\x2d698de445fa48.mount: Deactivated successfully. Feb 13 20:45:46.884766 systemd-networkd[1448]: cali5f2e90379bb: Link UP Feb 13 20:45:46.885860 systemd-networkd[1448]: cali5f2e90379bb: Gained carrier Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.818 [INFO][4623] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0 csi-node-driver- calico-system bdc0f384-e713-4490-b32f-30642a7169b0 744 0 2025-02-13 20:45:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.1-a-d679334e6e csi-node-driver-4jn87 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5f2e90379bb [] []}} ContainerID="161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" Namespace="calico-system" Pod="csi-node-driver-4jn87" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-" Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.818 [INFO][4623] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" Namespace="calico-system" Pod="csi-node-driver-4jn87" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.844 [INFO][4634] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" 
HandleID="k8s-pod-network.161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" Workload="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.853 [INFO][4634] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" HandleID="k8s-pod-network.161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" Workload="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-d679334e6e", "pod":"csi-node-driver-4jn87", "timestamp":"2025-02-13 20:45:46.84448998 +0000 UTC"}, Hostname:"ci-4081.3.1-a-d679334e6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.853 [INFO][4634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.853 [INFO][4634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.853 [INFO][4634] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-d679334e6e' Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.855 [INFO][4634] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.858 [INFO][4634] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.861 [INFO][4634] ipam/ipam.go 489: Trying affinity for 192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.863 [INFO][4634] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.864 [INFO][4634] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.864 [INFO][4634] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.866 [INFO][4634] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.871 [INFO][4634] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.879 [INFO][4634] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.1/26] block=192.168.126.0/26 handle="k8s-pod-network.161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.879 [INFO][4634] 
ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.1/26] handle="k8s-pod-network.161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.879 [INFO][4634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:45:46.905501 containerd[1690]: 2025-02-13 20:45:46.879 [INFO][4634] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.1/26] IPv6=[] ContainerID="161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" HandleID="k8s-pod-network.161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" Workload="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:45:46.908586 containerd[1690]: 2025-02-13 20:45:46.881 [INFO][4623] cni-plugin/k8s.go 386: Populated endpoint ContainerID="161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" Namespace="calico-system" Pod="csi-node-driver-4jn87" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bdc0f384-e713-4490-b32f-30642a7169b0", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"", Pod:"csi-node-driver-4jn87", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5f2e90379bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:45:46.908586 containerd[1690]: 2025-02-13 20:45:46.881 [INFO][4623] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.1/32] ContainerID="161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" Namespace="calico-system" Pod="csi-node-driver-4jn87" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:45:46.908586 containerd[1690]: 2025-02-13 20:45:46.881 [INFO][4623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f2e90379bb ContainerID="161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" Namespace="calico-system" Pod="csi-node-driver-4jn87" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:45:46.908586 containerd[1690]: 2025-02-13 20:45:46.884 [INFO][4623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" Namespace="calico-system" 
Pod="csi-node-driver-4jn87" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:45:46.908586 containerd[1690]: 2025-02-13 20:45:46.884 [INFO][4623] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" Namespace="calico-system" Pod="csi-node-driver-4jn87" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bdc0f384-e713-4490-b32f-30642a7169b0", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd", Pod:"csi-node-driver-4jn87", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5f2e90379bb", MAC:"d2:00:2e:e0:ad:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:45:46.908586 containerd[1690]: 2025-02-13 20:45:46.901 [INFO][4623] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd" Namespace="calico-system" Pod="csi-node-driver-4jn87" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:45:46.934978 containerd[1690]: time="2025-02-13T20:45:46.934794432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:46.934978 containerd[1690]: time="2025-02-13T20:45:46.934854633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:46.934978 containerd[1690]: time="2025-02-13T20:45:46.934876034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:46.935820 containerd[1690]: time="2025-02-13T20:45:46.935579747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:46.960478 systemd[1]: Started cri-containerd-161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd.scope - libcontainer container 161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd. 
Feb 13 20:45:46.981464 containerd[1690]: time="2025-02-13T20:45:46.981303483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4jn87,Uid:bdc0f384-e713-4490-b32f-30642a7169b0,Namespace:calico-system,Attempt:1,} returns sandbox id \"161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd\"" Feb 13 20:45:46.983629 containerd[1690]: time="2025-02-13T20:45:46.983511123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:45:47.155204 systemd-networkd[1448]: vxlan.calico: Gained IPv6LL Feb 13 20:45:47.651424 containerd[1690]: time="2025-02-13T20:45:47.651378340Z" level=info msg="StopPodSandbox for \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\"" Feb 13 20:45:47.651813 containerd[1690]: time="2025-02-13T20:45:47.651780947Z" level=info msg="StopPodSandbox for \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\"" Feb 13 20:45:47.655066 containerd[1690]: time="2025-02-13T20:45:47.654940705Z" level=info msg="StopPodSandbox for \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\"" Feb 13 20:45:47.801237 containerd[1690]: 2025-02-13 20:45:47.742 [INFO][4737] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Feb 13 20:45:47.801237 containerd[1690]: 2025-02-13 20:45:47.742 [INFO][4737] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" iface="eth0" netns="/var/run/netns/cni-c0858e83-f279-e319-165c-ef5eb5fcd6b6" Feb 13 20:45:47.801237 containerd[1690]: 2025-02-13 20:45:47.743 [INFO][4737] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" iface="eth0" netns="/var/run/netns/cni-c0858e83-f279-e319-165c-ef5eb5fcd6b6" Feb 13 20:45:47.801237 containerd[1690]: 2025-02-13 20:45:47.743 [INFO][4737] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" iface="eth0" netns="/var/run/netns/cni-c0858e83-f279-e319-165c-ef5eb5fcd6b6" Feb 13 20:45:47.801237 containerd[1690]: 2025-02-13 20:45:47.743 [INFO][4737] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Feb 13 20:45:47.801237 containerd[1690]: 2025-02-13 20:45:47.743 [INFO][4737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Feb 13 20:45:47.801237 containerd[1690]: 2025-02-13 20:45:47.776 [INFO][4755] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" HandleID="k8s-pod-network.8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:45:47.801237 containerd[1690]: 2025-02-13 20:45:47.777 [INFO][4755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:45:47.801237 containerd[1690]: 2025-02-13 20:45:47.777 [INFO][4755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:45:47.801237 containerd[1690]: 2025-02-13 20:45:47.791 [WARNING][4755] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" HandleID="k8s-pod-network.8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:45:47.801237 containerd[1690]: 2025-02-13 20:45:47.791 [INFO][4755] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" HandleID="k8s-pod-network.8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:45:47.801237 containerd[1690]: 2025-02-13 20:45:47.794 [INFO][4755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:45:47.801237 containerd[1690]: 2025-02-13 20:45:47.796 [INFO][4737] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Feb 13 20:45:47.804005 systemd[1]: run-netns-cni\x2dc0858e83\x2df279\x2de319\x2d165c\x2def5eb5fcd6b6.mount: Deactivated successfully. Feb 13 20:45:47.808202 containerd[1690]: time="2025-02-13T20:45:47.804082033Z" level=info msg="TearDown network for sandbox \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\" successfully" Feb 13 20:45:47.808202 containerd[1690]: time="2025-02-13T20:45:47.804113634Z" level=info msg="StopPodSandbox for \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\" returns successfully" Feb 13 20:45:47.808202 containerd[1690]: time="2025-02-13T20:45:47.807804301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd94c5f6c-ffdzp,Uid:9b078f36-321f-46d6-b74f-37d7f4d0e5a4,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:45:47.828174 containerd[1690]: 2025-02-13 20:45:47.767 [INFO][4733] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Feb 13 20:45:47.828174 containerd[1690]: 2025-02-13 20:45:47.768 [INFO][4733] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" iface="eth0" netns="/var/run/netns/cni-ae7e232e-b99d-d48b-235c-1b899fa6b169" Feb 13 20:45:47.828174 containerd[1690]: 2025-02-13 20:45:47.769 [INFO][4733] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" iface="eth0" netns="/var/run/netns/cni-ae7e232e-b99d-d48b-235c-1b899fa6b169" Feb 13 20:45:47.828174 containerd[1690]: 2025-02-13 20:45:47.769 [INFO][4733] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" iface="eth0" netns="/var/run/netns/cni-ae7e232e-b99d-d48b-235c-1b899fa6b169" Feb 13 20:45:47.828174 containerd[1690]: 2025-02-13 20:45:47.769 [INFO][4733] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Feb 13 20:45:47.828174 containerd[1690]: 2025-02-13 20:45:47.769 [INFO][4733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Feb 13 20:45:47.828174 containerd[1690]: 2025-02-13 20:45:47.816 [INFO][4762] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" HandleID="k8s-pod-network.2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:45:47.828174 containerd[1690]: 2025-02-13 20:45:47.816 [INFO][4762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:45:47.828174 containerd[1690]: 2025-02-13 20:45:47.816 [INFO][4762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:45:47.828174 containerd[1690]: 2025-02-13 20:45:47.823 [WARNING][4762] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" HandleID="k8s-pod-network.2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:45:47.828174 containerd[1690]: 2025-02-13 20:45:47.824 [INFO][4762] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" HandleID="k8s-pod-network.2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:45:47.828174 containerd[1690]: 2025-02-13 20:45:47.825 [INFO][4762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:45:47.828174 containerd[1690]: 2025-02-13 20:45:47.826 [INFO][4733] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Feb 13 20:45:47.829457 containerd[1690]: time="2025-02-13T20:45:47.829038290Z" level=info msg="TearDown network for sandbox \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\" successfully" Feb 13 20:45:47.829457 containerd[1690]: time="2025-02-13T20:45:47.829084590Z" level=info msg="StopPodSandbox for \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\" returns successfully" Feb 13 20:45:47.831792 containerd[1690]: time="2025-02-13T20:45:47.831459134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4th5s,Uid:bd165ac4-5280-463e-90d2-d1e413c8b382,Namespace:kube-system,Attempt:1,}" Feb 13 20:45:47.834839 systemd[1]: run-netns-cni\x2dae7e232e\x2db99d\x2dd48b\x2d235c\x2d1b899fa6b169.mount: Deactivated successfully. Feb 13 20:45:47.841167 containerd[1690]: 2025-02-13 20:45:47.771 [INFO][4738] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Feb 13 20:45:47.841167 containerd[1690]: 2025-02-13 20:45:47.771 [INFO][4738] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" iface="eth0" netns="/var/run/netns/cni-a5448437-5622-5670-1200-5e9813c52b7b" Feb 13 20:45:47.841167 containerd[1690]: 2025-02-13 20:45:47.771 [INFO][4738] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" iface="eth0" netns="/var/run/netns/cni-a5448437-5622-5670-1200-5e9813c52b7b" Feb 13 20:45:47.841167 containerd[1690]: 2025-02-13 20:45:47.772 [INFO][4738] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" iface="eth0" netns="/var/run/netns/cni-a5448437-5622-5670-1200-5e9813c52b7b" Feb 13 20:45:47.841167 containerd[1690]: 2025-02-13 20:45:47.772 [INFO][4738] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Feb 13 20:45:47.841167 containerd[1690]: 2025-02-13 20:45:47.772 [INFO][4738] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Feb 13 20:45:47.841167 containerd[1690]: 2025-02-13 20:45:47.825 [INFO][4763] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" HandleID="k8s-pod-network.7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:45:47.841167 containerd[1690]: 2025-02-13 20:45:47.825 [INFO][4763] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:45:47.841167 containerd[1690]: 2025-02-13 20:45:47.825 [INFO][4763] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:45:47.841167 containerd[1690]: 2025-02-13 20:45:47.835 [WARNING][4763] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" HandleID="k8s-pod-network.7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:45:47.841167 containerd[1690]: 2025-02-13 20:45:47.836 [INFO][4763] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" HandleID="k8s-pod-network.7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:45:47.841167 containerd[1690]: 2025-02-13 20:45:47.838 [INFO][4763] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:45:47.841167 containerd[1690]: 2025-02-13 20:45:47.840 [INFO][4738] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Feb 13 20:45:47.841706 containerd[1690]: time="2025-02-13T20:45:47.841315414Z" level=info msg="TearDown network for sandbox \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\" successfully" Feb 13 20:45:47.841706 containerd[1690]: time="2025-02-13T20:45:47.841340015Z" level=info msg="StopPodSandbox for \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\" returns successfully" Feb 13 20:45:47.842364 containerd[1690]: time="2025-02-13T20:45:47.841998127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59fdbdb8b6-m5nsm,Uid:99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf,Namespace:calico-system,Attempt:1,}" Feb 13 20:45:47.845491 systemd[1]: run-netns-cni\x2da5448437\x2d5622\x2d5670\x2d1200\x2d5e9813c52b7b.mount: Deactivated successfully. Feb 13 20:45:48.023849 systemd-networkd[1448]: calidc85708b8f8: Link UP Feb 13 20:45:48.028007 systemd-networkd[1448]: calidc85708b8f8: Gained carrier Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:47.906 [INFO][4775] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0 calico-apiserver-5cd94c5f6c- calico-apiserver 9b078f36-321f-46d6-b74f-37d7f4d0e5a4 754 0 2025-02-13 20:45:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cd94c5f6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-a-d679334e6e calico-apiserver-5cd94c5f6c-ffdzp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidc85708b8f8 [] []}} ContainerID="c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-ffdzp" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-" Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:47.906 [INFO][4775] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-ffdzp" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:47.949 [INFO][4788] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" HandleID="k8s-pod-network.c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:47.960 [INFO][4788] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" HandleID="k8s-pod-network.c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003357e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-d679334e6e", "pod":"calico-apiserver-5cd94c5f6c-ffdzp", "timestamp":"2025-02-13 20:45:47.949222588 +0000 UTC"}, Hostname:"ci-4081.3.1-a-d679334e6e", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:47.960 [INFO][4788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:47.961 [INFO][4788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:47.961 [INFO][4788] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-d679334e6e' Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:47.964 [INFO][4788] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:47.976 [INFO][4788] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:47.984 [INFO][4788] ipam/ipam.go 489: Trying affinity for 192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:47.987 [INFO][4788] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:47.992 [INFO][4788] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:47.993 [INFO][4788] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:47.996 [INFO][4788] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0 Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:48.006 [INFO][4788] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:48.014 [INFO][4788] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.2/26] block=192.168.126.0/26 handle="k8s-pod-network.c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:48.014 [INFO][4788] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.2/26] handle="k8s-pod-network.c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:48.014 [INFO][4788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
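The IPAM records above show Calico's assignment flow end to end: take the host-wide IPAM lock, look up the host's block affinities, confirm the 192.168.126.0/26 block, then claim one IPv4 under a per-container handle. A minimal sketch of driving the same AutoAssign call through libcalico-go follows; the module paths and the clientv3/ipam signatures are assumptions based on recent Calico releases plus the field names visible in the AutoAssignArgs dump above, not something this log confirms.

    // ipam_sketch.go — hedged sketch: requests one IPv4 from Calico IPAM the way
    // the CNI plugin does above. API details assumed from libcalico-go v3.x.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/projectcalico/calico/libcalico-go/lib/apiconfig"
        "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
        "github.com/projectcalico/calico/libcalico-go/lib/ipam"
    )

    func main() {
        cfg := apiconfig.NewCalicoAPIConfig()
        cfg.Spec.DatastoreType = apiconfig.Kubernetes // assumption: match the cluster's datastore
        c, err := clientv3.New(*cfg)
        if err != nil {
            log.Fatal(err)
        }

        // Handle IDs in the log take the form "k8s-pod-network.<container-id>".
        handle := "k8s-pod-network.example-container-id" // hypothetical handle
        v4, _, err := c.IPAM().AutoAssign(context.Background(), ipam.AutoAssignArgs{
            Num4:     1,
            HandleID: &handle,
            Attrs: map[string]string{ // same attribute keys as the AutoAssignArgs dump above
                "namespace": "calico-apiserver",
                "node":      "ci-4081.3.1-a-d679334e6e",
                "pod":       "calico-apiserver-5cd94c5f6c-ffdzp",
            },
            Hostname:    "ci-4081.3.1-a-d679334e6e",
            IntendedUse: "Workload",
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("assigned: %+v\n", v4) // e.g. one address out of 192.168.126.0/26
    }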
Feb 13 20:45:48.064724 containerd[1690]: 2025-02-13 20:45:48.014 [INFO][4788] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.2/26] IPv6=[] ContainerID="c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" HandleID="k8s-pod-network.c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:45:48.067559 containerd[1690]: 2025-02-13 20:45:48.017 [INFO][4775] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-ffdzp" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0", GenerateName:"calico-apiserver-5cd94c5f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b078f36-321f-46d6-b74f-37d7f4d0e5a4", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd94c5f6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"", Pod:"calico-apiserver-5cd94c5f6c-ffdzp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc85708b8f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:45:48.067559 containerd[1690]: 2025-02-13 20:45:48.018 [INFO][4775] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.2/32] ContainerID="c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-ffdzp" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:45:48.067559 containerd[1690]: 2025-02-13 20:45:48.018 [INFO][4775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc85708b8f8 ContainerID="c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-ffdzp" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:45:48.067559 containerd[1690]: 2025-02-13 20:45:48.030 [INFO][4775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-ffdzp" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:45:48.067559 containerd[1690]: 2025-02-13 20:45:48.032 [INFO][4775] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-ffdzp" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0", GenerateName:"calico-apiserver-5cd94c5f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b078f36-321f-46d6-b74f-37d7f4d0e5a4", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd94c5f6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0", Pod:"calico-apiserver-5cd94c5f6c-ffdzp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc85708b8f8", MAC:"4a:e9:b6:75:38:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:45:48.067559 containerd[1690]: 2025-02-13 20:45:48.054 [INFO][4775] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-ffdzp" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:45:48.120728 containerd[1690]: time="2025-02-13T20:45:48.119552504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:48.120728 containerd[1690]: time="2025-02-13T20:45:48.120484221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:48.120728 containerd[1690]: time="2025-02-13T20:45:48.120505021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:48.120728 containerd[1690]: time="2025-02-13T20:45:48.120610623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:48.139078 systemd-networkd[1448]: cali6ef6ead4beb: Link UP Feb 13 20:45:48.140267 systemd-networkd[1448]: cali6ef6ead4beb: Gained carrier Feb 13 20:45:48.150134 systemd[1]: Started cri-containerd-c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0.scope - libcontainer container c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0. 
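Each of these teardown/setup sequences is the CNI binary being invoked with an ADD or DEL command plus the ContainerID, netns path, and iface that keep reappearing in the messages. For orientation, here is a bare-bones CNI plugin skeleton using the upstream github.com/containernetworking/cni scaffolding; it is a sketch of the calling convention only, not Calico's plugin, and the printed result is a placeholder.

    // cni_skeleton.go — hedged sketch of the CNI entry points (ADD/CHECK/DEL)
    // that produce ContainerID/Netns/IfName triples like the ones logged above.
    package main

    import (
        "github.com/containernetworking/cni/pkg/skel"
        "github.com/containernetworking/cni/pkg/types"
        current "github.com/containernetworking/cni/pkg/types/100"
        "github.com/containernetworking/cni/pkg/version"
    )

    func cmdAdd(args *skel.CmdArgs) error {
        // args.ContainerID, args.Netns, args.IfName correspond to the
        // ContainerID="...", netns="/var/run/netns/cni-...", iface="eth0"
        // fields in the log. A real plugin would create the veth pair and
        // call IPAM here; this sketch just returns an empty result.
        return types.PrintResult(&current.Result{CNIVersion: "1.0.0"}, "1.0.0")
    }

    func cmdCheck(args *skel.CmdArgs) error { return nil }

    func cmdDel(args *skel.CmdArgs) error {
        // Mirrors the "Releasing IP address(es)" / "Teardown processing
        // complete" path above: release IPAM, delete the veth, and exit
        // cleanly even if the netns is already gone.
        return nil
    }

    func main() {
        skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "netns-demo (sketch)")
    }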
Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:47.991 [INFO][4792] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0 calico-kube-controllers-59fdbdb8b6- calico-system 99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf 756 0 2025-02-13 20:45:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59fdbdb8b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.1-a-d679334e6e calico-kube-controllers-59fdbdb8b6-m5nsm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6ef6ead4beb [] []}} ContainerID="a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" Namespace="calico-system" Pod="calico-kube-controllers-59fdbdb8b6-m5nsm" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-" Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:47.991 [INFO][4792] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" Namespace="calico-system" Pod="calico-kube-controllers-59fdbdb8b6-m5nsm" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.063 [INFO][4822] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" HandleID="k8s-pod-network.a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.086 [INFO][4822] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" HandleID="k8s-pod-network.a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291000), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-d679334e6e", "pod":"calico-kube-controllers-59fdbdb8b6-m5nsm", "timestamp":"2025-02-13 20:45:48.062843966 +0000 UTC"}, Hostname:"ci-4081.3.1-a-d679334e6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.086 [INFO][4822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.086 [INFO][4822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.086 [INFO][4822] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-d679334e6e' Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.096 [INFO][4822] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.102 [INFO][4822] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.107 [INFO][4822] ipam/ipam.go 489: Trying affinity for 192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.109 [INFO][4822] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.112 [INFO][4822] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.112 [INFO][4822] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.114 [INFO][4822] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.119 [INFO][4822] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.130 [INFO][4822] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.3/26] block=192.168.126.0/26 handle="k8s-pod-network.a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.130 [INFO][4822] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.3/26] handle="k8s-pod-network.a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.130 [INFO][4822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
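All of the pods on this node draw from the same 192.168.126.0/26 affinity block, which is why the claimed addresses march through .2, .3, .4 in successive records. The arithmetic is just the mask: a /26 leaves 32 - 26 = 6 host bits, so each block holds 2^6 = 64 addresses. A stdlib-only check:

    // block_size.go — computes how many addresses a Calico /26 affinity block holds.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, block, err := net.ParseCIDR("192.168.126.0/26") // the block in the log
        if err != nil {
            panic(err)
        }
        ones, bits := block.Mask.Size() // 26, 32
        fmt.Println(1 << (bits - ones)) // 64 addresses per block
    }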
Feb 13 20:45:48.163656 containerd[1690]: 2025-02-13 20:45:48.130 [INFO][4822] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.3/26] IPv6=[] ContainerID="a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" HandleID="k8s-pod-network.a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:45:48.165118 containerd[1690]: 2025-02-13 20:45:48.133 [INFO][4792] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" Namespace="calico-system" Pod="calico-kube-controllers-59fdbdb8b6-m5nsm" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0", GenerateName:"calico-kube-controllers-59fdbdb8b6-", Namespace:"calico-system", SelfLink:"", UID:"99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59fdbdb8b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"", Pod:"calico-kube-controllers-59fdbdb8b6-m5nsm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ef6ead4beb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:45:48.165118 containerd[1690]: 2025-02-13 20:45:48.134 [INFO][4792] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.3/32] ContainerID="a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" Namespace="calico-system" Pod="calico-kube-controllers-59fdbdb8b6-m5nsm" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:45:48.165118 containerd[1690]: 2025-02-13 20:45:48.135 [INFO][4792] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ef6ead4beb ContainerID="a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" Namespace="calico-system" Pod="calico-kube-controllers-59fdbdb8b6-m5nsm" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:45:48.165118 containerd[1690]: 2025-02-13 20:45:48.140 [INFO][4792] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" Namespace="calico-system" Pod="calico-kube-controllers-59fdbdb8b6-m5nsm" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:45:48.165118 
containerd[1690]: 2025-02-13 20:45:48.141 [INFO][4792] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" Namespace="calico-system" Pod="calico-kube-controllers-59fdbdb8b6-m5nsm" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0", GenerateName:"calico-kube-controllers-59fdbdb8b6-", Namespace:"calico-system", SelfLink:"", UID:"99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59fdbdb8b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe", Pod:"calico-kube-controllers-59fdbdb8b6-m5nsm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ef6ead4beb", MAC:"42:cd:b8:d2:22:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:45:48.165118 containerd[1690]: 2025-02-13 20:45:48.160 [INFO][4792] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe" Namespace="calico-system" Pod="calico-kube-controllers-59fdbdb8b6-m5nsm" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:45:48.203976 containerd[1690]: time="2025-02-13T20:45:48.203870546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:48.204275 containerd[1690]: time="2025-02-13T20:45:48.203928547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:48.204275 containerd[1690]: time="2025-02-13T20:45:48.203970048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:48.204275 containerd[1690]: time="2025-02-13T20:45:48.204057749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:48.230193 systemd[1]: Started cri-containerd-a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe.scope - libcontainer container a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe. 
Feb 13 20:45:48.254936 systemd-networkd[1448]: calib1bc2aed7f7: Link UP Feb 13 20:45:48.255169 systemd-networkd[1448]: calib1bc2aed7f7: Gained carrier Feb 13 20:45:48.267906 containerd[1690]: time="2025-02-13T20:45:48.266892799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd94c5f6c-ffdzp,Uid:9b078f36-321f-46d6-b74f-37d7f4d0e5a4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0\"" Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:47.988 [INFO][4787] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0 coredns-668d6bf9bc- kube-system bd165ac4-5280-463e-90d2-d1e413c8b382 755 0 2025-02-13 20:45:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-a-d679334e6e coredns-668d6bf9bc-4th5s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib1bc2aed7f7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4th5s" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-" Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:47.988 [INFO][4787] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4th5s" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.084 [INFO][4818] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" HandleID="k8s-pod-network.f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.097 [INFO][4818] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" HandleID="k8s-pod-network.f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f0fa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-d679334e6e", "pod":"coredns-668d6bf9bc-4th5s", "timestamp":"2025-02-13 20:45:48.084620465 +0000 UTC"}, Hostname:"ci-4081.3.1-a-d679334e6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.098 [INFO][4818] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.132 [INFO][4818] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
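systemd-networkd reports each new host-side veth (calidc85708b8f8, cali6ef6ead4beb, calib1bc2aed7f7, ...) as "Link UP" then "Gained carrier", and later "Gained IPv6LL" once the link-local address lands. A small sketch that reads the same state via github.com/vishvananda/netlink — the interface name is a placeholder, and the LinkByName/AddrList calls are assumed from that library rather than anything in this log:

    // veth_state.go — hedged sketch: inspect a Calico host-side veth's oper
    // state, MAC, and IPv6 link-local address, mirroring the networkd records.
    package main

    import (
        "fmt"
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        link, err := netlink.LinkByName("calidc85708b8f8") // substitute a real cali* interface
        if err != nil {
            log.Fatal(err)
        }
        attrs := link.Attrs()
        fmt.Println("oper state:", attrs.OperState) // "up" once carrier is gained
        fmt.Println("mac:       ", attrs.HardwareAddr)

        addrs, err := netlink.AddrList(link, netlink.FAMILY_V6)
        if err != nil {
            log.Fatal(err)
        }
        for _, a := range addrs {
            fmt.Println("ipv6:", a.IPNet) // the fe80::/64 entry is the "IPv6LL" networkd waits for
        }
    }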
Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.132 [INFO][4818] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-d679334e6e' Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.194 [INFO][4818] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.201 [INFO][4818] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.209 [INFO][4818] ipam/ipam.go 489: Trying affinity for 192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.212 [INFO][4818] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.215 [INFO][4818] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.215 [INFO][4818] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.218 [INFO][4818] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6 Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.226 [INFO][4818] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.243 [INFO][4818] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.4/26] block=192.168.126.0/26 handle="k8s-pod-network.f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.243 [INFO][4818] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.4/26] handle="k8s-pod-network.f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.243 [INFO][4818] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:45:48.277840 containerd[1690]: 2025-02-13 20:45:48.243 [INFO][4818] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.4/26] IPv6=[] ContainerID="f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" HandleID="k8s-pod-network.f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:45:48.278810 containerd[1690]: 2025-02-13 20:45:48.247 [INFO][4787] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4th5s" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd165ac4-5280-463e-90d2-d1e413c8b382", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"", Pod:"coredns-668d6bf9bc-4th5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1bc2aed7f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:45:48.278810 containerd[1690]: 2025-02-13 20:45:48.248 [INFO][4787] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.4/32] ContainerID="f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4th5s" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:45:48.278810 containerd[1690]: 2025-02-13 20:45:48.248 [INFO][4787] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1bc2aed7f7 ContainerID="f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4th5s" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:45:48.278810 containerd[1690]: 2025-02-13 20:45:48.252 [INFO][4787] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4th5s" 
WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:45:48.278810 containerd[1690]: 2025-02-13 20:45:48.253 [INFO][4787] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4th5s" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd165ac4-5280-463e-90d2-d1e413c8b382", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6", Pod:"coredns-668d6bf9bc-4th5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1bc2aed7f7", MAC:"2a:29:1a:15:c2:14", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:45:48.278810 containerd[1690]: 2025-02-13 20:45:48.274 [INFO][4787] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4th5s" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:45:48.316983 containerd[1690]: time="2025-02-13T20:45:48.315670891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:48.316983 containerd[1690]: time="2025-02-13T20:45:48.315731192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:48.316983 containerd[1690]: time="2025-02-13T20:45:48.315766993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:48.316983 containerd[1690]: time="2025-02-13T20:45:48.315869295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:48.342276 systemd[1]: Started cri-containerd-f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6.scope - libcontainer container f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6. Feb 13 20:45:48.362400 containerd[1690]: time="2025-02-13T20:45:48.360185305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59fdbdb8b6-m5nsm,Uid:99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf,Namespace:calico-system,Attempt:1,} returns sandbox id \"a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe\"" Feb 13 20:45:48.413807 containerd[1690]: time="2025-02-13T20:45:48.413768285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4th5s,Uid:bd165ac4-5280-463e-90d2-d1e413c8b382,Namespace:kube-system,Attempt:1,} returns sandbox id \"f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6\"" Feb 13 20:45:48.416778 containerd[1690]: time="2025-02-13T20:45:48.416652538Z" level=info msg="CreateContainer within sandbox \"f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:45:48.481150 containerd[1690]: time="2025-02-13T20:45:48.481111617Z" level=info msg="CreateContainer within sandbox \"f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a384b8f738545ed19b7c3ea834e3d087eed20139a1ecc5d61e23e72005f71cf\"" Feb 13 20:45:48.482203 containerd[1690]: time="2025-02-13T20:45:48.482155636Z" level=info msg="StartContainer for \"5a384b8f738545ed19b7c3ea834e3d087eed20139a1ecc5d61e23e72005f71cf\"" Feb 13 20:45:48.499477 systemd-networkd[1448]: cali5f2e90379bb: Gained IPv6LL Feb 13 20:45:48.522150 systemd[1]: Started cri-containerd-5a384b8f738545ed19b7c3ea834e3d087eed20139a1ecc5d61e23e72005f71cf.scope - libcontainer container 5a384b8f738545ed19b7c3ea834e3d087eed20139a1ecc5d61e23e72005f71cf. 
Feb 13 20:45:48.564662 containerd[1690]: time="2025-02-13T20:45:48.564528943Z" level=info msg="StartContainer for \"5a384b8f738545ed19b7c3ea834e3d087eed20139a1ecc5d61e23e72005f71cf\" returns successfully" Feb 13 20:45:48.652885 containerd[1690]: time="2025-02-13T20:45:48.651383732Z" level=info msg="StopPodSandbox for \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\"" Feb 13 20:45:48.709283 containerd[1690]: time="2025-02-13T20:45:48.709236490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:48.713097 containerd[1690]: time="2025-02-13T20:45:48.713051260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:45:48.715724 containerd[1690]: time="2025-02-13T20:45:48.715683008Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:48.722766 containerd[1690]: time="2025-02-13T20:45:48.722540733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:48.724322 containerd[1690]: time="2025-02-13T20:45:48.723196845Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.73956982s" Feb 13 20:45:48.724322 containerd[1690]: time="2025-02-13T20:45:48.723233146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:45:48.728362 containerd[1690]: time="2025-02-13T20:45:48.728043034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:45:48.737322 containerd[1690]: time="2025-02-13T20:45:48.737276403Z" level=info msg="CreateContainer within sandbox \"161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:45:48.794418 containerd[1690]: time="2025-02-13T20:45:48.794369647Z" level=info msg="CreateContainer within sandbox \"161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c7a2e63e4482e65810362ea011293835579b156e207a9880fc76a457b62bcffb\"" Feb 13 20:45:48.796395 containerd[1690]: time="2025-02-13T20:45:48.796351484Z" level=info msg="StartContainer for \"c7a2e63e4482e65810362ea011293835579b156e207a9880fc76a457b62bcffb\"" Feb 13 20:45:48.843517 containerd[1690]: 2025-02-13 20:45:48.728 [INFO][5050] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Feb 13 20:45:48.843517 containerd[1690]: 2025-02-13 20:45:48.729 [INFO][5050] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" iface="eth0" netns="/var/run/netns/cni-c77bf1d0-809d-f686-d046-2b6c32daeb20" Feb 13 20:45:48.843517 containerd[1690]: 2025-02-13 20:45:48.729 [INFO][5050] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" iface="eth0" netns="/var/run/netns/cni-c77bf1d0-809d-f686-d046-2b6c32daeb20" Feb 13 20:45:48.843517 containerd[1690]: 2025-02-13 20:45:48.732 [INFO][5050] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" iface="eth0" netns="/var/run/netns/cni-c77bf1d0-809d-f686-d046-2b6c32daeb20" Feb 13 20:45:48.843517 containerd[1690]: 2025-02-13 20:45:48.732 [INFO][5050] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Feb 13 20:45:48.843517 containerd[1690]: 2025-02-13 20:45:48.732 [INFO][5050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Feb 13 20:45:48.843517 containerd[1690]: 2025-02-13 20:45:48.795 [INFO][5056] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" HandleID="k8s-pod-network.be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:45:48.843517 containerd[1690]: 2025-02-13 20:45:48.796 [INFO][5056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:45:48.843517 containerd[1690]: 2025-02-13 20:45:48.796 [INFO][5056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:45:48.843517 containerd[1690]: 2025-02-13 20:45:48.829 [WARNING][5056] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" HandleID="k8s-pod-network.be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:45:48.843517 containerd[1690]: 2025-02-13 20:45:48.829 [INFO][5056] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" HandleID="k8s-pod-network.be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:45:48.843517 containerd[1690]: 2025-02-13 20:45:48.836 [INFO][5056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:45:48.843517 containerd[1690]: 2025-02-13 20:45:48.838 [INFO][5050] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Feb 13 20:45:48.845256 containerd[1690]: time="2025-02-13T20:45:48.845036274Z" level=info msg="TearDown network for sandbox \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\" successfully" Feb 13 20:45:48.845256 containerd[1690]: time="2025-02-13T20:45:48.845199277Z" level=info msg="StopPodSandbox for \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\" returns successfully" Feb 13 20:45:48.849281 containerd[1690]: time="2025-02-13T20:45:48.849049748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd94c5f6c-cq9bs,Uid:63e3fb52-bf2b-489c-b2b0-089fed67b060,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:45:48.852509 systemd[1]: run-netns-cni\x2dc77bf1d0\x2d809d\x2df686\x2dd046\x2d2b6c32daeb20.mount: Deactivated successfully. Feb 13 20:45:48.889126 systemd[1]: run-containerd-runc-k8s.io-c7a2e63e4482e65810362ea011293835579b156e207a9880fc76a457b62bcffb-runc.0EuYlp.mount: Deactivated successfully. Feb 13 20:45:48.895143 systemd[1]: Started cri-containerd-c7a2e63e4482e65810362ea011293835579b156e207a9880fc76a457b62bcffb.scope - libcontainer container c7a2e63e4482e65810362ea011293835579b156e207a9880fc76a457b62bcffb. Feb 13 20:45:48.942711 kubelet[3212]: I0213 20:45:48.941164 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4th5s" podStartSLOduration=36.941141932 podStartE2EDuration="36.941141932s" podCreationTimestamp="2025-02-13 20:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:48.905829486 +0000 UTC m=+42.912981556" watchObservedRunningTime="2025-02-13 20:45:48.941141932 +0000 UTC m=+42.948294002" Feb 13 20:45:48.985556 containerd[1690]: time="2025-02-13T20:45:48.984771630Z" level=info msg="StartContainer for \"c7a2e63e4482e65810362ea011293835579b156e207a9880fc76a457b62bcffb\" returns successfully" Feb 13 20:45:49.067629 systemd-networkd[1448]: cali03f2219244d: Link UP Feb 13 20:45:49.068251 systemd-networkd[1448]: cali03f2219244d: Gained carrier Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:48.998 [INFO][5087] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0 calico-apiserver-5cd94c5f6c- calico-apiserver 63e3fb52-bf2b-489c-b2b0-089fed67b060 774 0 2025-02-13 20:45:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cd94c5f6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-a-d679334e6e calico-apiserver-5cd94c5f6c-cq9bs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali03f2219244d [] []}} ContainerID="895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-cq9bs" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-" Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:48.998 [INFO][5087] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-cq9bs" 
WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.026 [INFO][5110] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" HandleID="k8s-pod-network.895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.035 [INFO][5110] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" HandleID="k8s-pod-network.895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319410), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-d679334e6e", "pod":"calico-apiserver-5cd94c5f6c-cq9bs", "timestamp":"2025-02-13 20:45:49.026283089 +0000 UTC"}, Hostname:"ci-4081.3.1-a-d679334e6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.035 [INFO][5110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.035 [INFO][5110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.035 [INFO][5110] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-d679334e6e' Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.038 [INFO][5110] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.041 [INFO][5110] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.044 [INFO][5110] ipam/ipam.go 489: Trying affinity for 192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.046 [INFO][5110] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.048 [INFO][5110] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.048 [INFO][5110] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.049 [INFO][5110] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5 Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.053 [INFO][5110] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" host="ci-4081.3.1-a-d679334e6e" Feb 13 
20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.062 [INFO][5110] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.5/26] block=192.168.126.0/26 handle="k8s-pod-network.895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.062 [INFO][5110] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.5/26] handle="k8s-pod-network.895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.062 [INFO][5110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:45:49.086088 containerd[1690]: 2025-02-13 20:45:49.062 [INFO][5110] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.5/26] IPv6=[] ContainerID="895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" HandleID="k8s-pod-network.895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:45:49.087011 containerd[1690]: 2025-02-13 20:45:49.064 [INFO][5087] cni-plugin/k8s.go 386: Populated endpoint ContainerID="895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-cq9bs" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0", GenerateName:"calico-apiserver-5cd94c5f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"63e3fb52-bf2b-489c-b2b0-089fed67b060", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd94c5f6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"", Pod:"calico-apiserver-5cd94c5f6c-cq9bs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03f2219244d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:45:49.087011 containerd[1690]: 2025-02-13 20:45:49.064 [INFO][5087] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.5/32] ContainerID="895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-cq9bs" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:45:49.087011 containerd[1690]: 2025-02-13 20:45:49.064 [INFO][5087] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03f2219244d 
ContainerID="895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-cq9bs" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:45:49.087011 containerd[1690]: 2025-02-13 20:45:49.067 [INFO][5087] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-cq9bs" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:45:49.087011 containerd[1690]: 2025-02-13 20:45:49.068 [INFO][5087] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-cq9bs" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0", GenerateName:"calico-apiserver-5cd94c5f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"63e3fb52-bf2b-489c-b2b0-089fed67b060", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd94c5f6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5", Pod:"calico-apiserver-5cd94c5f6c-cq9bs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03f2219244d", MAC:"d2:cd:fe:1c:eb:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:45:49.087011 containerd[1690]: 2025-02-13 20:45:49.081 [INFO][5087] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5" Namespace="calico-apiserver" Pod="calico-apiserver-5cd94c5f6c-cq9bs" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:45:49.117103 containerd[1690]: time="2025-02-13T20:45:49.116440939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:49.117103 containerd[1690]: time="2025-02-13T20:45:49.116588741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:49.117103 containerd[1690]: time="2025-02-13T20:45:49.116627942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:49.117103 containerd[1690]: time="2025-02-13T20:45:49.116767845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:49.137149 systemd[1]: Started cri-containerd-895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5.scope - libcontainer container 895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5. Feb 13 20:45:49.185356 containerd[1690]: time="2025-02-13T20:45:49.185311798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd94c5f6c-cq9bs,Uid:63e3fb52-bf2b-489c-b2b0-089fed67b060,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5\"" Feb 13 20:45:49.203108 systemd-networkd[1448]: cali6ef6ead4beb: Gained IPv6LL Feb 13 20:45:49.267154 systemd-networkd[1448]: calib1bc2aed7f7: Gained IPv6LL Feb 13 20:45:49.395141 systemd-networkd[1448]: calidc85708b8f8: Gained IPv6LL Feb 13 20:45:49.651641 containerd[1690]: time="2025-02-13T20:45:49.651113419Z" level=info msg="StopPodSandbox for \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\"" Feb 13 20:45:49.737084 containerd[1690]: 2025-02-13 20:45:49.700 [INFO][5183] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Feb 13 20:45:49.737084 containerd[1690]: 2025-02-13 20:45:49.701 [INFO][5183] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" iface="eth0" netns="/var/run/netns/cni-8cff719a-6017-c6a0-a14d-c9191278b45a" Feb 13 20:45:49.737084 containerd[1690]: 2025-02-13 20:45:49.701 [INFO][5183] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" iface="eth0" netns="/var/run/netns/cni-8cff719a-6017-c6a0-a14d-c9191278b45a" Feb 13 20:45:49.737084 containerd[1690]: 2025-02-13 20:45:49.702 [INFO][5183] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" iface="eth0" netns="/var/run/netns/cni-8cff719a-6017-c6a0-a14d-c9191278b45a" Feb 13 20:45:49.737084 containerd[1690]: 2025-02-13 20:45:49.702 [INFO][5183] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Feb 13 20:45:49.737084 containerd[1690]: 2025-02-13 20:45:49.702 [INFO][5183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Feb 13 20:45:49.737084 containerd[1690]: 2025-02-13 20:45:49.726 [INFO][5189] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" HandleID="k8s-pod-network.db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:45:49.737084 containerd[1690]: 2025-02-13 20:45:49.726 [INFO][5189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:45:49.737084 containerd[1690]: 2025-02-13 20:45:49.726 [INFO][5189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
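The ipam_plugin.go lines above bracket every allocation-block update between "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock", so concurrent CNI ADD/DEL operations on this node serialize their changes to the block. A minimal sketch of that locking pattern, assuming a simple in-memory block and hypothetical names (hostIPAM and withLock are illustrative, not Calico's actual types):

    package main

    import (
        "fmt"
        "sync"
    )

    // hostIPAM serializes all allocation-block updates on one node, mirroring
    // the "About to acquire / Acquired / Released host-wide IPAM lock" lines.
    type hostIPAM struct {
        mu    sync.Mutex      // stands in for the host-wide lock
        inUse map[string]bool // IP -> allocated? (illustrative block state)
    }

    func (h *hostIPAM) withLock(op string, fn func()) {
        fmt.Printf("About to acquire host-wide IPAM lock. op=%s\n", op)
        h.mu.Lock()
        fmt.Println("Acquired host-wide IPAM lock.")
        defer func() {
            h.mu.Unlock()
            fmt.Println("Released host-wide IPAM lock.")
        }()
        fn()
    }

    func main() {
        h := &hostIPAM{inUse: map[string]bool{}}
        h.withLock("assign", func() { h.inUse["192.168.126.5"] = true })
        h.withLock("release", func() { delete(h.inUse, "192.168.126.5") })
    }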
Feb 13 20:45:49.737084 containerd[1690]: 2025-02-13 20:45:49.732 [WARNING][5189] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" HandleID="k8s-pod-network.db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:45:49.737084 containerd[1690]: 2025-02-13 20:45:49.733 [INFO][5189] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" HandleID="k8s-pod-network.db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:45:49.737084 containerd[1690]: 2025-02-13 20:45:49.734 [INFO][5189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:45:49.737084 containerd[1690]: 2025-02-13 20:45:49.735 [INFO][5183] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Feb 13 20:45:49.737986 containerd[1690]: time="2025-02-13T20:45:49.737299895Z" level=info msg="TearDown network for sandbox \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\" successfully" Feb 13 20:45:49.737986 containerd[1690]: time="2025-02-13T20:45:49.737348296Z" level=info msg="StopPodSandbox for \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\" returns successfully" Feb 13 20:45:49.738371 containerd[1690]: time="2025-02-13T20:45:49.738296113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qb2g2,Uid:960cb38c-4a08-4b6d-84ad-a76ffe60ddf8,Namespace:kube-system,Attempt:1,}" Feb 13 20:45:49.808876 systemd[1]: run-netns-cni\x2d8cff719a\x2d6017\x2dc6a0\x2da14d\x2dc9191278b45a.mount: Deactivated successfully. 
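The teardown above is deliberately idempotent: the handle lookup fails ("Asked to release address but it doesn't exist. Ignoring"), the plugin falls back to releasing by workload ID, and StopPodSandbox still returns successfully, so a repeated teardown cannot wedge the pod. A hedged sketch of that release shape, with a map standing in for the datastore (releaseByHandle and the handle string are illustrative):

    package main

    import "fmt"

    // releaseByHandle frees whatever the store still holds for handleID and
    // treats "nothing found" as success, matching the WARNING in the log.
    func releaseByHandle(store map[string]string, handleID string) error {
        ip, ok := store[handleID]
        if !ok {
            fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
            return nil // idempotent: already released
        }
        delete(store, handleID)
        fmt.Printf("released %s for handle %s\n", ip, handleID)
        return nil
    }

    func main() {
        store := map[string]string{"k8s-pod-network.example": "192.168.126.4"}
        _ = releaseByHandle(store, "k8s-pod-network.example") // frees the IP
        _ = releaseByHandle(store, "k8s-pod-network.example") // no-op, still nil
    }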
Feb 13 20:45:49.874936 systemd-networkd[1448]: cali0106f0d5148: Link UP Feb 13 20:45:49.876138 systemd-networkd[1448]: cali0106f0d5148: Gained carrier Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.798 [INFO][5196] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0 coredns-668d6bf9bc- kube-system 960cb38c-4a08-4b6d-84ad-a76ffe60ddf8 793 0 2025-02-13 20:45:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-a-d679334e6e coredns-668d6bf9bc-qb2g2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0106f0d5148 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" Namespace="kube-system" Pod="coredns-668d6bf9bc-qb2g2" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-" Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.798 [INFO][5196] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" Namespace="kube-system" Pod="coredns-668d6bf9bc-qb2g2" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.833 [INFO][5207] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" HandleID="k8s-pod-network.61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.841 [INFO][5207] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" HandleID="k8s-pod-network.61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039a990), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-d679334e6e", "pod":"coredns-668d6bf9bc-qb2g2", "timestamp":"2025-02-13 20:45:49.833524955 +0000 UTC"}, Hostname:"ci-4081.3.1-a-d679334e6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.841 [INFO][5207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.841 [INFO][5207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
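The AutoAssignArgs dump at 20:45:49.841 is the complete IPAM request for the coredns pod: one IPv4 address, zero IPv6, a handle built from the network name plus container ID, and attributes recording the requesting namespace, node, and pod. A trimmed stand-in for that request shape, reduced for illustration rather than importing Calico's real ipam package:

    package main

    import (
        "fmt"
        "time"
    )

    // autoAssignArgs is an illustrative, cut-down version of the request
    // logged above: 1 IPv4, 0 IPv6, a handle ID, and bookkeeping attributes.
    type autoAssignArgs struct {
        Num4, Num6 int
        HandleID   string
        Attrs      map[string]string
        Hostname   string
    }

    func main() {
        req := autoAssignArgs{
            Num4:     1,
            Num6:     0,
            HandleID: "k8s-pod-network.61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c",
            Attrs: map[string]string{
                "namespace": "kube-system",
                "node":      "ci-4081.3.1-a-d679334e6e",
                "pod":       "coredns-668d6bf9bc-qb2g2",
                "timestamp": time.Now().UTC().String(),
            },
            Hostname: "ci-4081.3.1-a-d679334e6e",
        }
        fmt.Printf("Auto assigning IP: %+v\n", req)
    }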
Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.841 [INFO][5207] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-d679334e6e' Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.843 [INFO][5207] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.846 [INFO][5207] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.850 [INFO][5207] ipam/ipam.go 489: Trying affinity for 192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.853 [INFO][5207] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.855 [INFO][5207] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.855 [INFO][5207] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.856 [INFO][5207] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.860 [INFO][5207] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.869 [INFO][5207] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.6/26] block=192.168.126.0/26 handle="k8s-pod-network.61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.869 [INFO][5207] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.6/26] handle="k8s-pod-network.61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" host="ci-4081.3.1-a-d679334e6e" Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.869 [INFO][5207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
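Lines ipam.go 660 through 1216 above are one full assignment pass: confirm the host's affinity for block 192.168.126.0/26, load the block, claim the next free address (192.168.126.6 here; the block's earlier ordinals, including the .5 assigned above, are already in use), and write the block back under the lock. A minimal sketch of the "next free address in a /26" step, assuming block state is just an in-use set (the real allocator also tracks handles, reservations, and attributes):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree scans the 64 addresses of a /26 block in order and returns the
    // first one absent from inUse, mimicking "Attempting to assign 1 addresses
    // from block". ok is false when the block is exhausted.
    func nextFree(block netip.Prefix, inUse map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !inUse[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.126.0/26")
        inUse := map[netip.Addr]bool{}
        for _, s := range []string{"192.168.126.0", "192.168.126.1", "192.168.126.2",
            "192.168.126.3", "192.168.126.4", "192.168.126.5"} {
            inUse[netip.MustParseAddr(s)] = true
        }
        if ip, ok := nextFree(block, inUse); ok {
            fmt.Println("Successfully claimed:", ip) // 192.168.126.6, as in the log
        }
    }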
Feb 13 20:45:49.898923 containerd[1690]: 2025-02-13 20:45:49.869 [INFO][5207] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.6/26] IPv6=[] ContainerID="61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" HandleID="k8s-pod-network.61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:45:49.901153 containerd[1690]: 2025-02-13 20:45:49.871 [INFO][5196] cni-plugin/k8s.go 386: Populated endpoint ContainerID="61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" Namespace="kube-system" Pod="coredns-668d6bf9bc-qb2g2" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"960cb38c-4a08-4b6d-84ad-a76ffe60ddf8", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"", Pod:"coredns-668d6bf9bc-qb2g2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0106f0d5148", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:45:49.901153 containerd[1690]: 2025-02-13 20:45:49.871 [INFO][5196] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.6/32] ContainerID="61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" Namespace="kube-system" Pod="coredns-668d6bf9bc-qb2g2" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:45:49.901153 containerd[1690]: 2025-02-13 20:45:49.872 [INFO][5196] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0106f0d5148 ContainerID="61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" Namespace="kube-system" Pod="coredns-668d6bf9bc-qb2g2" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:45:49.901153 containerd[1690]: 2025-02-13 20:45:49.876 [INFO][5196] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" Namespace="kube-system" Pod="coredns-668d6bf9bc-qb2g2" 
WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:45:49.901153 containerd[1690]: 2025-02-13 20:45:49.877 [INFO][5196] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" Namespace="kube-system" Pod="coredns-668d6bf9bc-qb2g2" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"960cb38c-4a08-4b6d-84ad-a76ffe60ddf8", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c", Pod:"coredns-668d6bf9bc-qb2g2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0106f0d5148", MAC:"72:38:15:d4:ec:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:45:49.901153 containerd[1690]: 2025-02-13 20:45:49.896 [INFO][5196] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c" Namespace="kube-system" Pod="coredns-668d6bf9bc-qb2g2" WorkloadEndpoint="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:45:49.933107 containerd[1690]: time="2025-02-13T20:45:49.931211342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:49.933268 containerd[1690]: time="2025-02-13T20:45:49.931284044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:49.933268 containerd[1690]: time="2025-02-13T20:45:49.931317144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:49.933268 containerd[1690]: time="2025-02-13T20:45:49.931544848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:49.986416 systemd[1]: Started cri-containerd-61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c.scope - libcontainer container 61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c. Feb 13 20:45:50.050533 containerd[1690]: time="2025-02-13T20:45:50.050499224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qb2g2,Uid:960cb38c-4a08-4b6d-84ad-a76ffe60ddf8,Namespace:kube-system,Attempt:1,} returns sandbox id \"61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c\"" Feb 13 20:45:50.053682 containerd[1690]: time="2025-02-13T20:45:50.053527680Z" level=info msg="CreateContainer within sandbox \"61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:45:50.090787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount793027599.mount: Deactivated successfully. Feb 13 20:45:50.095623 containerd[1690]: time="2025-02-13T20:45:50.095541148Z" level=info msg="CreateContainer within sandbox \"61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11e0542f0ec697343d09c8ba32393a44445428463e27037473f516ad35a6347a\"" Feb 13 20:45:50.096563 containerd[1690]: time="2025-02-13T20:45:50.096468765Z" level=info msg="StartContainer for \"11e0542f0ec697343d09c8ba32393a44445428463e27037473f516ad35a6347a\"" Feb 13 20:45:50.124240 systemd[1]: Started cri-containerd-11e0542f0ec697343d09c8ba32393a44445428463e27037473f516ad35a6347a.scope - libcontainer container 11e0542f0ec697343d09c8ba32393a44445428463e27037473f516ad35a6347a. Feb 13 20:45:50.155276 containerd[1690]: time="2025-02-13T20:45:50.155190539Z" level=info msg="StartContainer for \"11e0542f0ec697343d09c8ba32393a44445428463e27037473f516ad35a6347a\" returns successfully" Feb 13 20:45:50.291270 systemd-networkd[1448]: cali03f2219244d: Gained IPv6LL Feb 13 20:45:50.944299 kubelet[3212]: I0213 20:45:50.943573 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qb2g2" podStartSLOduration=38.94355346 podStartE2EDuration="38.94355346s" podCreationTimestamp="2025-02-13 20:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:50.923202288 +0000 UTC m=+44.930354258" watchObservedRunningTime="2025-02-13 20:45:50.94355346 +0000 UTC m=+44.950705430" Feb 13 20:45:50.995132 systemd-networkd[1448]: cali0106f0d5148: Gained IPv6LL Feb 13 20:45:52.000353 containerd[1690]: time="2025-02-13T20:45:52.000297365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:52.002830 containerd[1690]: time="2025-02-13T20:45:52.002699709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 20:45:52.006783 containerd[1690]: time="2025-02-13T20:45:52.006728983Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:52.011135 containerd[1690]: time="2025-02-13T20:45:52.011084462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:52.011909 containerd[1690]: time="2025-02-13T20:45:52.011766575Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.281026191s" Feb 13 20:45:52.011909 containerd[1690]: time="2025-02-13T20:45:52.011805575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:45:52.014134 containerd[1690]: time="2025-02-13T20:45:52.013752911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:45:52.014702 containerd[1690]: time="2025-02-13T20:45:52.014673628Z" level=info msg="CreateContainer within sandbox \"c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:45:52.053204 containerd[1690]: time="2025-02-13T20:45:52.053166230Z" level=info msg="CreateContainer within sandbox \"c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"64b579b6e05c6d51b394be5685253ee12f6c0aa9e4c75250639005a7708043af\"" Feb 13 20:45:52.053634 containerd[1690]: time="2025-02-13T20:45:52.053607638Z" level=info msg="StartContainer for \"64b579b6e05c6d51b394be5685253ee12f6c0aa9e4c75250639005a7708043af\"" Feb 13 20:45:52.090147 systemd[1]: Started cri-containerd-64b579b6e05c6d51b394be5685253ee12f6c0aa9e4c75250639005a7708043af.scope - libcontainer container 64b579b6e05c6d51b394be5685253ee12f6c0aa9e4c75250639005a7708043af. 
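The "in 3.281026191s" suffix on the Pulled image line is the wall-clock difference between the start and finish of the pull; the time= fields in these messages use the RFC 3339 layout with nanoseconds. A small sketch of that arithmetic, where the start time is reconstructed for illustration (the log prints only the finish time and the duration):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Finish time taken from the log; start time is assumed for
        // illustration (finish minus the reported 3.281026191s).
        finish, err := time.Parse(time.RFC3339Nano, "2025-02-13T20:45:52.011766575Z")
        if err != nil {
            panic(err)
        }
        start := finish.Add(-3281026191 * time.Nanosecond)

        fmt.Printf("pulled in %s\n", finish.Sub(start)) // pulled in 3.281026191s
    }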
Feb 13 20:45:52.133147 containerd[1690]: time="2025-02-13T20:45:52.133094389Z" level=info msg="StartContainer for \"64b579b6e05c6d51b394be5685253ee12f6c0aa9e4c75250639005a7708043af\" returns successfully" Feb 13 20:45:52.928615 kubelet[3212]: I0213 20:45:52.928541 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cd94c5f6c-ffdzp" podStartSLOduration=30.185944171 podStartE2EDuration="33.928520805s" podCreationTimestamp="2025-02-13 20:45:19 +0000 UTC" firstStartedPulling="2025-02-13 20:45:48.270350862 +0000 UTC m=+42.277502832" lastFinishedPulling="2025-02-13 20:45:52.012927496 +0000 UTC m=+46.020079466" observedRunningTime="2025-02-13 20:45:52.927483686 +0000 UTC m=+46.934635656" watchObservedRunningTime="2025-02-13 20:45:52.928520805 +0000 UTC m=+46.935672775" Feb 13 20:45:53.913258 kubelet[3212]: I0213 20:45:53.913223 3212 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:45:54.875386 containerd[1690]: time="2025-02-13T20:45:54.875335832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:54.877286 containerd[1690]: time="2025-02-13T20:45:54.877229067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 20:45:54.881106 containerd[1690]: time="2025-02-13T20:45:54.881044736Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:54.886377 containerd[1690]: time="2025-02-13T20:45:54.886317333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:54.887342 containerd[1690]: time="2025-02-13T20:45:54.886917144Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.873127432s" Feb 13 20:45:54.887342 containerd[1690]: time="2025-02-13T20:45:54.886979545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 20:45:54.888105 containerd[1690]: time="2025-02-13T20:45:54.888073265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:45:54.897589 containerd[1690]: time="2025-02-13T20:45:54.897000828Z" level=info msg="CreateContainer within sandbox \"a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:45:54.936112 containerd[1690]: time="2025-02-13T20:45:54.936066841Z" level=info msg="CreateContainer within sandbox \"a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7c52480ad37dfa20c9f8c27b3c464b376886aaa9772d61eb1bd9285a721b4695\"" Feb 13 20:45:54.936668 containerd[1690]: time="2025-02-13T20:45:54.936623951Z" 
level=info msg="StartContainer for \"7c52480ad37dfa20c9f8c27b3c464b376886aaa9772d61eb1bd9285a721b4695\"" Feb 13 20:45:54.978126 systemd[1]: Started cri-containerd-7c52480ad37dfa20c9f8c27b3c464b376886aaa9772d61eb1bd9285a721b4695.scope - libcontainer container 7c52480ad37dfa20c9f8c27b3c464b376886aaa9772d61eb1bd9285a721b4695. Feb 13 20:45:55.021613 containerd[1690]: time="2025-02-13T20:45:55.021441099Z" level=info msg="StartContainer for \"7c52480ad37dfa20c9f8c27b3c464b376886aaa9772d61eb1bd9285a721b4695\" returns successfully" Feb 13 20:45:55.942928 kubelet[3212]: I0213 20:45:55.942593 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59fdbdb8b6-m5nsm" podStartSLOduration=30.419964445 podStartE2EDuration="36.942569108s" podCreationTimestamp="2025-02-13 20:45:19 +0000 UTC" firstStartedPulling="2025-02-13 20:45:48.365293699 +0000 UTC m=+42.372445669" lastFinishedPulling="2025-02-13 20:45:54.887898262 +0000 UTC m=+48.895050332" observedRunningTime="2025-02-13 20:45:55.941246984 +0000 UTC m=+49.948399054" watchObservedRunningTime="2025-02-13 20:45:55.942569108 +0000 UTC m=+49.949721278" Feb 13 20:45:56.643462 containerd[1690]: time="2025-02-13T20:45:56.643407698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:56.645369 containerd[1690]: time="2025-02-13T20:45:56.645297033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:45:56.651456 containerd[1690]: time="2025-02-13T20:45:56.651420444Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:56.658751 containerd[1690]: time="2025-02-13T20:45:56.657649358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:56.658751 containerd[1690]: time="2025-02-13T20:45:56.658580075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.77046551s" Feb 13 20:45:56.658751 containerd[1690]: time="2025-02-13T20:45:56.658614476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:45:56.662997 containerd[1690]: time="2025-02-13T20:45:56.662948655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:45:56.664989 containerd[1690]: time="2025-02-13T20:45:56.664895890Z" level=info msg="CreateContainer within sandbox \"161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:45:56.697390 containerd[1690]: time="2025-02-13T20:45:56.697315782Z" level=info msg="CreateContainer within sandbox \"161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7e2802b58edcaab58165cd0494945d815463e37f9735fc79081e6ff625b5b070\"" Feb 13 20:45:56.698102 containerd[1690]: time="2025-02-13T20:45:56.697877892Z" level=info msg="StartContainer for \"7e2802b58edcaab58165cd0494945d815463e37f9735fc79081e6ff625b5b070\"" Feb 13 20:45:56.748127 systemd[1]: Started cri-containerd-7e2802b58edcaab58165cd0494945d815463e37f9735fc79081e6ff625b5b070.scope - libcontainer container 7e2802b58edcaab58165cd0494945d815463e37f9735fc79081e6ff625b5b070. Feb 13 20:45:56.777802 containerd[1690]: time="2025-02-13T20:45:56.777748350Z" level=info msg="StartContainer for \"7e2802b58edcaab58165cd0494945d815463e37f9735fc79081e6ff625b5b070\" returns successfully" Feb 13 20:45:57.038136 containerd[1690]: time="2025-02-13T20:45:57.038091201Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:57.040028 containerd[1690]: time="2025-02-13T20:45:57.039952635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:45:57.041885 containerd[1690]: time="2025-02-13T20:45:57.041854969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 378.625309ms" Feb 13 20:45:57.041993 containerd[1690]: time="2025-02-13T20:45:57.041887870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:45:57.044309 containerd[1690]: time="2025-02-13T20:45:57.044037209Z" level=info msg="CreateContainer within sandbox \"895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:45:57.075715 containerd[1690]: time="2025-02-13T20:45:57.075669086Z" level=info msg="CreateContainer within sandbox \"895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b7b992ecfdaa0430f7b80b5cad136f1c10ea78e7fba7986548fdf081b6b43505\"" Feb 13 20:45:57.077575 containerd[1690]: time="2025-02-13T20:45:57.076356599Z" level=info msg="StartContainer for \"b7b992ecfdaa0430f7b80b5cad136f1c10ea78e7fba7986548fdf081b6b43505\"" Feb 13 20:45:57.120115 systemd[1]: Started cri-containerd-b7b992ecfdaa0430f7b80b5cad136f1c10ea78e7fba7986548fdf081b6b43505.scope - libcontainer container b7b992ecfdaa0430f7b80b5cad136f1c10ea78e7fba7986548fdf081b6b43505. 
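Compare the two pulls of ghcr.io/flatcar/calico/apiserver:v3.29.1: the first read about 42 MB in 3.3 s, while the one just above read 77 bytes in 378 ms, which is consistent with the content store already holding every blob so that only the manifest digest needs re-resolving. A hedged sketch of that digest short-circuit (the store map and resolve function are hypothetical, not containerd's API):

    package main

    import "fmt"

    // pull fetches an image only when the remotely resolved digest is not
    // already present locally; otherwise it returns the cached reference,
    // which is why the second pull above read only a few bytes.
    func pull(store map[string]bool, tag string, resolve func(string) string) string {
        digest := resolve(tag) // cheap: manifest digest only, not the layers
        if store[digest] {
            fmt.Println("digest already present, skipping layer download")
            return digest
        }
        fmt.Println("downloading layers for", digest)
        store[digest] = true
        return digest
    }

    func main() {
        resolve := func(string) string {
            return "sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486"
        }
        store := map[string]bool{}
        pull(store, "ghcr.io/flatcar/calico/apiserver:v3.29.1", resolve) // full pull
        pull(store, "ghcr.io/flatcar/calico/apiserver:v3.29.1", resolve) // cached
    }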
Feb 13 20:45:57.162582 containerd[1690]: time="2025-02-13T20:45:57.162532572Z" level=info msg="StartContainer for \"b7b992ecfdaa0430f7b80b5cad136f1c10ea78e7fba7986548fdf081b6b43505\" returns successfully" Feb 13 20:45:57.750543 kubelet[3212]: I0213 20:45:57.750470 3212 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:45:57.750543 kubelet[3212]: I0213 20:45:57.750503 3212 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:45:57.952473 kubelet[3212]: I0213 20:45:57.952410 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4jn87" podStartSLOduration=29.272957755 podStartE2EDuration="38.952378886s" podCreationTimestamp="2025-02-13 20:45:19 +0000 UTC" firstStartedPulling="2025-02-13 20:45:46.982646908 +0000 UTC m=+40.989798878" lastFinishedPulling="2025-02-13 20:45:56.662068039 +0000 UTC m=+50.669220009" observedRunningTime="2025-02-13 20:45:56.942366454 +0000 UTC m=+50.949518524" watchObservedRunningTime="2025-02-13 20:45:57.952378886 +0000 UTC m=+51.959530856" Feb 13 20:45:58.933421 kubelet[3212]: I0213 20:45:58.933385 3212 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:46:06.649346 containerd[1690]: time="2025-02-13T20:46:06.649303339Z" level=info msg="StopPodSandbox for \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\"" Feb 13 20:46:06.716126 containerd[1690]: 2025-02-13 20:46:06.685 [WARNING][5542] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"960cb38c-4a08-4b6d-84ad-a76ffe60ddf8", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c", Pod:"coredns-668d6bf9bc-qb2g2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0106f0d5148", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:06.716126 containerd[1690]: 2025-02-13 20:46:06.686 [INFO][5542] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Feb 13 20:46:06.716126 containerd[1690]: 2025-02-13 20:46:06.686 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" iface="eth0" netns="" Feb 13 20:46:06.716126 containerd[1690]: 2025-02-13 20:46:06.686 [INFO][5542] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Feb 13 20:46:06.716126 containerd[1690]: 2025-02-13 20:46:06.686 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Feb 13 20:46:06.716126 containerd[1690]: 2025-02-13 20:46:06.704 [INFO][5549] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" HandleID="k8s-pod-network.db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:46:06.716126 containerd[1690]: 2025-02-13 20:46:06.704 [INFO][5549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:06.716126 containerd[1690]: 2025-02-13 20:46:06.704 [INFO][5549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:06.716126 containerd[1690]: 2025-02-13 20:46:06.711 [WARNING][5549] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" HandleID="k8s-pod-network.db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:46:06.716126 containerd[1690]: 2025-02-13 20:46:06.711 [INFO][5549] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" HandleID="k8s-pod-network.db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:46:06.716126 containerd[1690]: 2025-02-13 20:46:06.713 [INFO][5549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:06.716126 containerd[1690]: 2025-02-13 20:46:06.715 [INFO][5542] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Feb 13 20:46:06.716891 containerd[1690]: time="2025-02-13T20:46:06.716171472Z" level=info msg="TearDown network for sandbox \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\" successfully" Feb 13 20:46:06.716891 containerd[1690]: time="2025-02-13T20:46:06.716201073Z" level=info msg="StopPodSandbox for \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\" returns successfully" Feb 13 20:46:06.716891 containerd[1690]: time="2025-02-13T20:46:06.716810684Z" level=info msg="RemovePodSandbox for \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\"" Feb 13 20:46:06.716891 containerd[1690]: time="2025-02-13T20:46:06.716850185Z" level=info msg="Forcibly stopping sandbox \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\"" Feb 13 20:46:06.783853 containerd[1690]: 2025-02-13 20:46:06.755 [WARNING][5567] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"960cb38c-4a08-4b6d-84ad-a76ffe60ddf8", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"61f4a1f903fb6a376d1f566edfbaceb89832245dbbec9d0cad8769ff57c8637c", Pod:"coredns-668d6bf9bc-qb2g2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0106f0d5148", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:06.783853 containerd[1690]: 2025-02-13 20:46:06.755 [INFO][5567] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Feb 13 20:46:06.783853 containerd[1690]: 2025-02-13 20:46:06.755 [INFO][5567] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" iface="eth0" netns="" Feb 13 20:46:06.783853 containerd[1690]: 2025-02-13 20:46:06.755 [INFO][5567] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Feb 13 20:46:06.783853 containerd[1690]: 2025-02-13 20:46:06.755 [INFO][5567] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Feb 13 20:46:06.783853 containerd[1690]: 2025-02-13 20:46:06.773 [INFO][5573] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" HandleID="k8s-pod-network.db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:46:06.783853 containerd[1690]: 2025-02-13 20:46:06.773 [INFO][5573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:06.783853 containerd[1690]: 2025-02-13 20:46:06.773 [INFO][5573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:06.783853 containerd[1690]: 2025-02-13 20:46:06.780 [WARNING][5573] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" HandleID="k8s-pod-network.db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:46:06.783853 containerd[1690]: 2025-02-13 20:46:06.780 [INFO][5573] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" HandleID="k8s-pod-network.db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--qb2g2-eth0" Feb 13 20:46:06.783853 containerd[1690]: 2025-02-13 20:46:06.782 [INFO][5573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:06.783853 containerd[1690]: 2025-02-13 20:46:06.782 [INFO][5567] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205" Feb 13 20:46:06.784587 containerd[1690]: time="2025-02-13T20:46:06.783906222Z" level=info msg="TearDown network for sandbox \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\" successfully" Feb 13 20:46:06.794396 containerd[1690]: time="2025-02-13T20:46:06.794291213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:46:06.794631 containerd[1690]: time="2025-02-13T20:46:06.794482717Z" level=info msg="RemovePodSandbox \"db9a8a744fd2960920c9d7fc056b5b618129ce6ef4c8c4d2b501243795c02205\" returns successfully" Feb 13 20:46:06.795261 containerd[1690]: time="2025-02-13T20:46:06.795225730Z" level=info msg="StopPodSandbox for \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\"" Feb 13 20:46:06.862525 containerd[1690]: 2025-02-13 20:46:06.827 [WARNING][5591] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0", GenerateName:"calico-kube-controllers-59fdbdb8b6-", Namespace:"calico-system", SelfLink:"", UID:"99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59fdbdb8b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe", Pod:"calico-kube-controllers-59fdbdb8b6-m5nsm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ef6ead4beb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:06.862525 containerd[1690]: 2025-02-13 20:46:06.827 [INFO][5591] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Feb 13 20:46:06.862525 containerd[1690]: 2025-02-13 20:46:06.827 [INFO][5591] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" iface="eth0" netns="" Feb 13 20:46:06.862525 containerd[1690]: 2025-02-13 20:46:06.827 [INFO][5591] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Feb 13 20:46:06.862525 containerd[1690]: 2025-02-13 20:46:06.827 [INFO][5591] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Feb 13 20:46:06.862525 containerd[1690]: 2025-02-13 20:46:06.848 [INFO][5597] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" HandleID="k8s-pod-network.7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:46:06.862525 containerd[1690]: 2025-02-13 20:46:06.848 [INFO][5597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:06.862525 containerd[1690]: 2025-02-13 20:46:06.848 [INFO][5597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:06.862525 containerd[1690]: 2025-02-13 20:46:06.856 [WARNING][5597] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" HandleID="k8s-pod-network.7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:46:06.862525 containerd[1690]: 2025-02-13 20:46:06.856 [INFO][5597] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" HandleID="k8s-pod-network.7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:46:06.862525 containerd[1690]: 2025-02-13 20:46:06.858 [INFO][5597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:06.862525 containerd[1690]: 2025-02-13 20:46:06.860 [INFO][5591] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Feb 13 20:46:06.862525 containerd[1690]: time="2025-02-13T20:46:06.862295967Z" level=info msg="TearDown network for sandbox \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\" successfully" Feb 13 20:46:06.862525 containerd[1690]: time="2025-02-13T20:46:06.862324068Z" level=info msg="StopPodSandbox for \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\" returns successfully" Feb 13 20:46:06.864928 containerd[1690]: time="2025-02-13T20:46:06.862616773Z" level=info msg="RemovePodSandbox for \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\"" Feb 13 20:46:06.864928 containerd[1690]: time="2025-02-13T20:46:06.862647074Z" level=info msg="Forcibly stopping sandbox \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\"" Feb 13 20:46:06.928035 containerd[1690]: 2025-02-13 20:46:06.899 [WARNING][5617] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0", GenerateName:"calico-kube-controllers-59fdbdb8b6-", Namespace:"calico-system", SelfLink:"", UID:"99b825d8-e1ce-486c-8a9a-fdc5d65f5ebf", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59fdbdb8b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"a0613796ca3875b7d8d72f22c0b2938268fde0844cfde805b77bb3da41dad7fe", Pod:"calico-kube-controllers-59fdbdb8b6-m5nsm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ef6ead4beb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:06.928035 containerd[1690]: 2025-02-13 20:46:06.899 [INFO][5617] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Feb 13 20:46:06.928035 containerd[1690]: 2025-02-13 20:46:06.899 [INFO][5617] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" iface="eth0" netns="" Feb 13 20:46:06.928035 containerd[1690]: 2025-02-13 20:46:06.899 [INFO][5617] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Feb 13 20:46:06.928035 containerd[1690]: 2025-02-13 20:46:06.899 [INFO][5617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Feb 13 20:46:06.928035 containerd[1690]: 2025-02-13 20:46:06.917 [INFO][5624] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" HandleID="k8s-pod-network.7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:46:06.928035 containerd[1690]: 2025-02-13 20:46:06.917 [INFO][5624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:06.928035 containerd[1690]: 2025-02-13 20:46:06.917 [INFO][5624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:06.928035 containerd[1690]: 2025-02-13 20:46:06.922 [WARNING][5624] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" HandleID="k8s-pod-network.7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:46:06.928035 containerd[1690]: 2025-02-13 20:46:06.922 [INFO][5624] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" HandleID="k8s-pod-network.7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--kube--controllers--59fdbdb8b6--m5nsm-eth0" Feb 13 20:46:06.928035 containerd[1690]: 2025-02-13 20:46:06.926 [INFO][5624] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:06.928035 containerd[1690]: 2025-02-13 20:46:06.926 [INFO][5617] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497" Feb 13 20:46:06.928035 containerd[1690]: time="2025-02-13T20:46:06.927982379Z" level=info msg="TearDown network for sandbox \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\" successfully" Feb 13 20:46:06.937168 containerd[1690]: time="2025-02-13T20:46:06.937126847Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:46:06.937280 containerd[1690]: time="2025-02-13T20:46:06.937195549Z" level=info msg="RemovePodSandbox \"7d9b763f9750faf3396bbbaaf769caa019aa1afc26e3a547aa65a5cee2e86497\" returns successfully" Feb 13 20:46:06.937764 containerd[1690]: time="2025-02-13T20:46:06.937714758Z" level=info msg="StopPodSandbox for \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\"" Feb 13 20:46:06.999882 containerd[1690]: 2025-02-13 20:46:06.971 [WARNING][5642] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0", GenerateName:"calico-apiserver-5cd94c5f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"63e3fb52-bf2b-489c-b2b0-089fed67b060", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd94c5f6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5", Pod:"calico-apiserver-5cd94c5f6c-cq9bs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03f2219244d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:06.999882 containerd[1690]: 2025-02-13 20:46:06.971 [INFO][5642] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Feb 13 20:46:06.999882 containerd[1690]: 2025-02-13 20:46:06.971 [INFO][5642] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" iface="eth0" netns="" Feb 13 20:46:06.999882 containerd[1690]: 2025-02-13 20:46:06.971 [INFO][5642] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Feb 13 20:46:06.999882 containerd[1690]: 2025-02-13 20:46:06.971 [INFO][5642] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Feb 13 20:46:06.999882 containerd[1690]: 2025-02-13 20:46:06.989 [INFO][5648] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" HandleID="k8s-pod-network.be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:46:06.999882 containerd[1690]: 2025-02-13 20:46:06.989 [INFO][5648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:06.999882 containerd[1690]: 2025-02-13 20:46:06.989 [INFO][5648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:06.999882 containerd[1690]: 2025-02-13 20:46:06.995 [WARNING][5648] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" HandleID="k8s-pod-network.be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:46:06.999882 containerd[1690]: 2025-02-13 20:46:06.995 [INFO][5648] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" HandleID="k8s-pod-network.be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:46:06.999882 containerd[1690]: 2025-02-13 20:46:06.998 [INFO][5648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:06.999882 containerd[1690]: 2025-02-13 20:46:06.998 [INFO][5642] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Feb 13 20:46:07.000643 containerd[1690]: time="2025-02-13T20:46:06.999912805Z" level=info msg="TearDown network for sandbox \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\" successfully" Feb 13 20:46:07.000643 containerd[1690]: time="2025-02-13T20:46:06.999942806Z" level=info msg="StopPodSandbox for \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\" returns successfully" Feb 13 20:46:07.000643 containerd[1690]: time="2025-02-13T20:46:07.000489716Z" level=info msg="RemovePodSandbox for \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\"" Feb 13 20:46:07.000643 containerd[1690]: time="2025-02-13T20:46:07.000523417Z" level=info msg="Forcibly stopping sandbox \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\"" Feb 13 20:46:07.057682 containerd[1690]: 2025-02-13 20:46:07.030 [WARNING][5666] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0", GenerateName:"calico-apiserver-5cd94c5f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"63e3fb52-bf2b-489c-b2b0-089fed67b060", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd94c5f6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"895714085d916666a0d284d354218688813b2a623e56cdeeecc33ae2335f59d5", Pod:"calico-apiserver-5cd94c5f6c-cq9bs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03f2219244d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:07.057682 containerd[1690]: 2025-02-13 20:46:07.030 [INFO][5666] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Feb 13 20:46:07.057682 containerd[1690]: 2025-02-13 20:46:07.030 [INFO][5666] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" iface="eth0" netns="" Feb 13 20:46:07.057682 containerd[1690]: 2025-02-13 20:46:07.030 [INFO][5666] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Feb 13 20:46:07.057682 containerd[1690]: 2025-02-13 20:46:07.030 [INFO][5666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Feb 13 20:46:07.057682 containerd[1690]: 2025-02-13 20:46:07.048 [INFO][5672] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" HandleID="k8s-pod-network.be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:46:07.057682 containerd[1690]: 2025-02-13 20:46:07.048 [INFO][5672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:07.057682 containerd[1690]: 2025-02-13 20:46:07.048 [INFO][5672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:07.057682 containerd[1690]: 2025-02-13 20:46:07.054 [WARNING][5672] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" HandleID="k8s-pod-network.be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:46:07.057682 containerd[1690]: 2025-02-13 20:46:07.054 [INFO][5672] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" HandleID="k8s-pod-network.be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--cq9bs-eth0" Feb 13 20:46:07.057682 containerd[1690]: 2025-02-13 20:46:07.055 [INFO][5672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:07.057682 containerd[1690]: 2025-02-13 20:46:07.056 [INFO][5666] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b" Feb 13 20:46:07.058605 containerd[1690]: time="2025-02-13T20:46:07.057731672Z" level=info msg="TearDown network for sandbox \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\" successfully" Feb 13 20:46:07.064421 containerd[1690]: time="2025-02-13T20:46:07.064382194Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:46:07.064544 containerd[1690]: time="2025-02-13T20:46:07.064454396Z" level=info msg="RemovePodSandbox \"be80994a5c1874db176d6823301c40eeff1302fc665f24497247bb353fc0f98b\" returns successfully" Feb 13 20:46:07.065467 containerd[1690]: time="2025-02-13T20:46:07.065422914Z" level=info msg="StopPodSandbox for \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\"" Feb 13 20:46:07.125926 containerd[1690]: 2025-02-13 20:46:07.098 [WARNING][5690] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd165ac4-5280-463e-90d2-d1e413c8b382", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6", Pod:"coredns-668d6bf9bc-4th5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1bc2aed7f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:07.125926 containerd[1690]: 2025-02-13 20:46:07.098 [INFO][5690] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Feb 13 20:46:07.125926 containerd[1690]: 2025-02-13 20:46:07.098 [INFO][5690] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" iface="eth0" netns="" Feb 13 20:46:07.125926 containerd[1690]: 2025-02-13 20:46:07.098 [INFO][5690] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Feb 13 20:46:07.125926 containerd[1690]: 2025-02-13 20:46:07.098 [INFO][5690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Feb 13 20:46:07.125926 containerd[1690]: 2025-02-13 20:46:07.116 [INFO][5697] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" HandleID="k8s-pod-network.2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:46:07.125926 containerd[1690]: 2025-02-13 20:46:07.116 [INFO][5697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:07.125926 containerd[1690]: 2025-02-13 20:46:07.116 [INFO][5697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:46:07.125926 containerd[1690]: 2025-02-13 20:46:07.122 [WARNING][5697] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" HandleID="k8s-pod-network.2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:46:07.125926 containerd[1690]: 2025-02-13 20:46:07.122 [INFO][5697] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" HandleID="k8s-pod-network.2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:46:07.125926 containerd[1690]: 2025-02-13 20:46:07.123 [INFO][5697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:07.125926 containerd[1690]: 2025-02-13 20:46:07.124 [INFO][5690] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Feb 13 20:46:07.126886 containerd[1690]: time="2025-02-13T20:46:07.125942430Z" level=info msg="TearDown network for sandbox \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\" successfully" Feb 13 20:46:07.126886 containerd[1690]: time="2025-02-13T20:46:07.126011331Z" level=info msg="StopPodSandbox for \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\" returns successfully" Feb 13 20:46:07.126886 containerd[1690]: time="2025-02-13T20:46:07.126578941Z" level=info msg="RemovePodSandbox for \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\"" Feb 13 20:46:07.126886 containerd[1690]: time="2025-02-13T20:46:07.126611942Z" level=info msg="Forcibly stopping sandbox \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\"" Feb 13 20:46:07.184865 containerd[1690]: 2025-02-13 20:46:07.157 [WARNING][5715] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd165ac4-5280-463e-90d2-d1e413c8b382", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"f10dc2f6784b4451af25e3d3d3768b1a82d9890ead2f8bf153ac87b53febf6d6", Pod:"coredns-668d6bf9bc-4th5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1bc2aed7f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:07.184865 containerd[1690]: 2025-02-13 20:46:07.157 [INFO][5715] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Feb 13 20:46:07.184865 containerd[1690]: 2025-02-13 20:46:07.157 [INFO][5715] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" iface="eth0" netns="" Feb 13 20:46:07.184865 containerd[1690]: 2025-02-13 20:46:07.157 [INFO][5715] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Feb 13 20:46:07.184865 containerd[1690]: 2025-02-13 20:46:07.157 [INFO][5715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Feb 13 20:46:07.184865 containerd[1690]: 2025-02-13 20:46:07.175 [INFO][5722] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" HandleID="k8s-pod-network.2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:46:07.184865 containerd[1690]: 2025-02-13 20:46:07.176 [INFO][5722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:07.184865 containerd[1690]: 2025-02-13 20:46:07.176 [INFO][5722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:46:07.184865 containerd[1690]: 2025-02-13 20:46:07.181 [WARNING][5722] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" HandleID="k8s-pod-network.2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:46:07.184865 containerd[1690]: 2025-02-13 20:46:07.181 [INFO][5722] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" HandleID="k8s-pod-network.2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Workload="ci--4081.3.1--a--d679334e6e-k8s-coredns--668d6bf9bc--4th5s-eth0" Feb 13 20:46:07.184865 containerd[1690]: 2025-02-13 20:46:07.182 [INFO][5722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:07.184865 containerd[1690]: 2025-02-13 20:46:07.183 [INFO][5715] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24" Feb 13 20:46:07.184865 containerd[1690]: time="2025-02-13T20:46:07.184795815Z" level=info msg="TearDown network for sandbox \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\" successfully" Feb 13 20:46:07.193420 containerd[1690]: time="2025-02-13T20:46:07.193334473Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:46:07.193539 containerd[1690]: time="2025-02-13T20:46:07.193451475Z" level=info msg="RemovePodSandbox \"2978a97751eec7f8767fbb2c5b88083156774c039be9f7c09a7fb47075dabf24\" returns successfully" Feb 13 20:46:07.194047 containerd[1690]: time="2025-02-13T20:46:07.194011485Z" level=info msg="StopPodSandbox for \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\"" Feb 13 20:46:07.253222 containerd[1690]: 2025-02-13 20:46:07.225 [WARNING][5740] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0", GenerateName:"calico-apiserver-5cd94c5f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b078f36-321f-46d6-b74f-37d7f4d0e5a4", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd94c5f6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0", Pod:"calico-apiserver-5cd94c5f6c-ffdzp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc85708b8f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:07.253222 containerd[1690]: 2025-02-13 20:46:07.225 [INFO][5740] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Feb 13 20:46:07.253222 containerd[1690]: 2025-02-13 20:46:07.225 [INFO][5740] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" iface="eth0" netns="" Feb 13 20:46:07.253222 containerd[1690]: 2025-02-13 20:46:07.225 [INFO][5740] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Feb 13 20:46:07.253222 containerd[1690]: 2025-02-13 20:46:07.225 [INFO][5740] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Feb 13 20:46:07.253222 containerd[1690]: 2025-02-13 20:46:07.244 [INFO][5746] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" HandleID="k8s-pod-network.8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:46:07.253222 containerd[1690]: 2025-02-13 20:46:07.245 [INFO][5746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:07.253222 containerd[1690]: 2025-02-13 20:46:07.245 [INFO][5746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:07.253222 containerd[1690]: 2025-02-13 20:46:07.250 [WARNING][5746] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" HandleID="k8s-pod-network.8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:46:07.253222 containerd[1690]: 2025-02-13 20:46:07.250 [INFO][5746] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" HandleID="k8s-pod-network.8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:46:07.253222 containerd[1690]: 2025-02-13 20:46:07.251 [INFO][5746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:07.253222 containerd[1690]: 2025-02-13 20:46:07.252 [INFO][5740] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Feb 13 20:46:07.254077 containerd[1690]: time="2025-02-13T20:46:07.253297579Z" level=info msg="TearDown network for sandbox \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\" successfully" Feb 13 20:46:07.254077 containerd[1690]: time="2025-02-13T20:46:07.253329179Z" level=info msg="StopPodSandbox for \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\" returns successfully" Feb 13 20:46:07.254077 containerd[1690]: time="2025-02-13T20:46:07.253923690Z" level=info msg="RemovePodSandbox for \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\"" Feb 13 20:46:07.254077 containerd[1690]: time="2025-02-13T20:46:07.253978791Z" level=info msg="Forcibly stopping sandbox \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\"" Feb 13 20:46:07.314873 containerd[1690]: 2025-02-13 20:46:07.286 [WARNING][5764] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0", GenerateName:"calico-apiserver-5cd94c5f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b078f36-321f-46d6-b74f-37d7f4d0e5a4", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd94c5f6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"c478429cb12124770b15df6102c84791ce581516d6f4e9c9834f5ea729a320e0", Pod:"calico-apiserver-5cd94c5f6c-ffdzp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc85708b8f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:07.314873 containerd[1690]: 2025-02-13 20:46:07.286 [INFO][5764] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Feb 13 20:46:07.314873 containerd[1690]: 2025-02-13 20:46:07.286 [INFO][5764] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" iface="eth0" netns="" Feb 13 20:46:07.314873 containerd[1690]: 2025-02-13 20:46:07.286 [INFO][5764] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Feb 13 20:46:07.314873 containerd[1690]: 2025-02-13 20:46:07.286 [INFO][5764] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Feb 13 20:46:07.314873 containerd[1690]: 2025-02-13 20:46:07.306 [INFO][5770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" HandleID="k8s-pod-network.8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:46:07.314873 containerd[1690]: 2025-02-13 20:46:07.306 [INFO][5770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:07.314873 containerd[1690]: 2025-02-13 20:46:07.306 [INFO][5770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:07.314873 containerd[1690]: 2025-02-13 20:46:07.311 [WARNING][5770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" HandleID="k8s-pod-network.8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:46:07.314873 containerd[1690]: 2025-02-13 20:46:07.311 [INFO][5770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" HandleID="k8s-pod-network.8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Workload="ci--4081.3.1--a--d679334e6e-k8s-calico--apiserver--5cd94c5f6c--ffdzp-eth0" Feb 13 20:46:07.314873 containerd[1690]: 2025-02-13 20:46:07.313 [INFO][5770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:07.314873 containerd[1690]: 2025-02-13 20:46:07.313 [INFO][5764] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013" Feb 13 20:46:07.315640 containerd[1690]: time="2025-02-13T20:46:07.314988916Z" level=info msg="TearDown network for sandbox \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\" successfully" Feb 13 20:46:07.321608 containerd[1690]: time="2025-02-13T20:46:07.321565138Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:46:07.321728 containerd[1690]: time="2025-02-13T20:46:07.321633939Z" level=info msg="RemovePodSandbox \"8c5b58a458c396c72e5d3084146fb9098b97c9f9635d130d9821667b4337b013\" returns successfully" Feb 13 20:46:07.322252 containerd[1690]: time="2025-02-13T20:46:07.322174549Z" level=info msg="StopPodSandbox for \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\"" Feb 13 20:46:07.382656 containerd[1690]: 2025-02-13 20:46:07.354 [WARNING][5788] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bdc0f384-e713-4490-b32f-30642a7169b0", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd", Pod:"csi-node-driver-4jn87", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5f2e90379bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:07.382656 containerd[1690]: 2025-02-13 20:46:07.354 [INFO][5788] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Feb 13 20:46:07.382656 containerd[1690]: 2025-02-13 20:46:07.354 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" iface="eth0" netns="" Feb 13 20:46:07.382656 containerd[1690]: 2025-02-13 20:46:07.354 [INFO][5788] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Feb 13 20:46:07.382656 containerd[1690]: 2025-02-13 20:46:07.354 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Feb 13 20:46:07.382656 containerd[1690]: 2025-02-13 20:46:07.372 [INFO][5795] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" HandleID="k8s-pod-network.8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Workload="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:46:07.382656 containerd[1690]: 2025-02-13 20:46:07.372 [INFO][5795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:07.382656 containerd[1690]: 2025-02-13 20:46:07.373 [INFO][5795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:07.382656 containerd[1690]: 2025-02-13 20:46:07.379 [WARNING][5795] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" HandleID="k8s-pod-network.8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Workload="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:46:07.382656 containerd[1690]: 2025-02-13 20:46:07.379 [INFO][5795] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" HandleID="k8s-pod-network.8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Workload="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:46:07.382656 containerd[1690]: 2025-02-13 20:46:07.380 [INFO][5795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:07.382656 containerd[1690]: 2025-02-13 20:46:07.381 [INFO][5788] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Feb 13 20:46:07.383536 containerd[1690]: time="2025-02-13T20:46:07.382699965Z" level=info msg="TearDown network for sandbox \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\" successfully" Feb 13 20:46:07.383536 containerd[1690]: time="2025-02-13T20:46:07.382729466Z" level=info msg="StopPodSandbox for \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\" returns successfully" Feb 13 20:46:07.383536 containerd[1690]: time="2025-02-13T20:46:07.383362677Z" level=info msg="RemovePodSandbox for \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\"" Feb 13 20:46:07.383536 containerd[1690]: time="2025-02-13T20:46:07.383397678Z" level=info msg="Forcibly stopping sandbox \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\"" Feb 13 20:46:07.445317 containerd[1690]: 2025-02-13 20:46:07.417 [WARNING][5814] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bdc0f384-e713-4490-b32f-30642a7169b0", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d679334e6e", ContainerID:"161986cd3bfacf0c8c66ee9393f290bad6b14ecfdf99ea1d06f69b913d1baffd", Pod:"csi-node-driver-4jn87", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5f2e90379bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:07.445317 containerd[1690]: 2025-02-13 20:46:07.417 [INFO][5814] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Feb 13 20:46:07.445317 containerd[1690]: 2025-02-13 20:46:07.417 [INFO][5814] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" iface="eth0" netns="" Feb 13 20:46:07.445317 containerd[1690]: 2025-02-13 20:46:07.417 [INFO][5814] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Feb 13 20:46:07.445317 containerd[1690]: 2025-02-13 20:46:07.417 [INFO][5814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Feb 13 20:46:07.445317 containerd[1690]: 2025-02-13 20:46:07.436 [INFO][5820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" HandleID="k8s-pod-network.8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Workload="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:46:07.445317 containerd[1690]: 2025-02-13 20:46:07.437 [INFO][5820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:07.445317 containerd[1690]: 2025-02-13 20:46:07.437 [INFO][5820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:07.445317 containerd[1690]: 2025-02-13 20:46:07.442 [WARNING][5820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" HandleID="k8s-pod-network.8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Workload="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:46:07.445317 containerd[1690]: 2025-02-13 20:46:07.442 [INFO][5820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" HandleID="k8s-pod-network.8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Workload="ci--4081.3.1--a--d679334e6e-k8s-csi--node--driver--4jn87-eth0" Feb 13 20:46:07.445317 containerd[1690]: 2025-02-13 20:46:07.443 [INFO][5820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:07.445317 containerd[1690]: 2025-02-13 20:46:07.444 [INFO][5814] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5" Feb 13 20:46:07.445317 containerd[1690]: time="2025-02-13T20:46:07.445288719Z" level=info msg="TearDown network for sandbox \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\" successfully" Feb 13 20:46:07.454605 containerd[1690]: time="2025-02-13T20:46:07.454564290Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:46:07.454723 containerd[1690]: time="2025-02-13T20:46:07.454628791Z" level=info msg="RemovePodSandbox \"8b153adf407c45d2551a24684f05c966102693d37df5b34fd386ca2506bff8e5\" returns successfully" Feb 13 20:46:14.923164 kubelet[3212]: I0213 20:46:14.923098 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cd94c5f6c-cq9bs" podStartSLOduration=48.06703944 podStartE2EDuration="55.923077502s" podCreationTimestamp="2025-02-13 20:45:19 +0000 UTC" firstStartedPulling="2025-02-13 20:45:49.186595722 +0000 UTC m=+43.193747692" lastFinishedPulling="2025-02-13 20:45:57.042633684 +0000 UTC m=+51.049785754" observedRunningTime="2025-02-13 20:45:57.952972396 +0000 UTC m=+51.960124366" watchObservedRunningTime="2025-02-13 20:46:14.923077502 +0000 UTC m=+68.930229472" Feb 13 20:46:20.836327 kubelet[3212]: I0213 20:46:20.836024 3212 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:46:24.720811 kubelet[3212]: I0213 20:46:24.720345 3212 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:47:25.947633 systemd[1]: run-containerd-runc-k8s.io-7c52480ad37dfa20c9f8c27b3c464b376886aaa9772d61eb1bd9285a721b4695-runc.k7PdOM.mount: Deactivated successfully. Feb 13 20:47:35.552268 systemd[1]: Started sshd@7-10.200.8.4:22-10.200.16.10:53402.service - OpenSSH per-connection server daemon (10.200.16.10:53402). Feb 13 20:47:36.175877 sshd[6019]: Accepted publickey for core from 10.200.16.10 port 53402 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:47:36.178005 sshd[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:36.182231 systemd-logind[1667]: New session 10 of user core. Feb 13 20:47:36.192102 systemd[1]: Started session-10.scope - Session 10 of User core. 
Feb 13 20:47:36.685151 sshd[6019]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:36.688833 systemd[1]: sshd@7-10.200.8.4:22-10.200.16.10:53402.service: Deactivated successfully. Feb 13 20:47:36.691102 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:47:36.692348 systemd-logind[1667]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:47:36.693467 systemd-logind[1667]: Removed session 10. Feb 13 20:47:41.800310 systemd[1]: Started sshd@8-10.200.8.4:22-10.200.16.10:49176.service - OpenSSH per-connection server daemon (10.200.16.10:49176). Feb 13 20:47:42.420560 sshd[6033]: Accepted publickey for core from 10.200.16.10 port 49176 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:47:42.422348 sshd[6033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:42.427092 systemd-logind[1667]: New session 11 of user core. Feb 13 20:47:42.435364 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:47:42.919728 sshd[6033]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:42.922716 systemd[1]: sshd@8-10.200.8.4:22-10.200.16.10:49176.service: Deactivated successfully. Feb 13 20:47:42.924836 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:47:42.926494 systemd-logind[1667]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:47:42.927662 systemd-logind[1667]: Removed session 11. Feb 13 20:47:48.030523 systemd[1]: Started sshd@9-10.200.8.4:22-10.200.16.10:49184.service - OpenSSH per-connection server daemon (10.200.16.10:49184). Feb 13 20:47:48.653706 sshd[6071]: Accepted publickey for core from 10.200.16.10 port 49184 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:47:48.655208 sshd[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:48.659716 systemd-logind[1667]: New session 12 of user core. Feb 13 20:47:48.663133 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:47:49.156174 sshd[6071]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:49.159276 systemd[1]: sshd@9-10.200.8.4:22-10.200.16.10:49184.service: Deactivated successfully. Feb 13 20:47:49.161614 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:47:49.163341 systemd-logind[1667]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:47:49.164634 systemd-logind[1667]: Removed session 12. Feb 13 20:47:54.270257 systemd[1]: Started sshd@10-10.200.8.4:22-10.200.16.10:49388.service - OpenSSH per-connection server daemon (10.200.16.10:49388). Feb 13 20:47:54.890834 sshd[6103]: Accepted publickey for core from 10.200.16.10 port 49388 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:47:54.892619 sshd[6103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:54.898259 systemd-logind[1667]: New session 13 of user core. Feb 13 20:47:54.908123 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:47:55.388731 sshd[6103]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:55.392316 systemd[1]: sshd@10-10.200.8.4:22-10.200.16.10:49388.service: Deactivated successfully. Feb 13 20:47:55.395129 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:47:55.397300 systemd-logind[1667]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:47:55.398507 systemd-logind[1667]: Removed session 13. 
Feb 13 20:47:55.499190 systemd[1]: Started sshd@11-10.200.8.4:22-10.200.16.10:49400.service - OpenSSH per-connection server daemon (10.200.16.10:49400). Feb 13 20:47:56.128980 sshd[6117]: Accepted publickey for core from 10.200.16.10 port 49400 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:47:56.130875 sshd[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:56.135850 systemd-logind[1667]: New session 14 of user core. Feb 13 20:47:56.140136 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:47:56.661787 sshd[6117]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:56.665161 systemd[1]: sshd@11-10.200.8.4:22-10.200.16.10:49400.service: Deactivated successfully. Feb 13 20:47:56.667433 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:47:56.669016 systemd-logind[1667]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:47:56.670184 systemd-logind[1667]: Removed session 14. Feb 13 20:47:56.772215 systemd[1]: Started sshd@12-10.200.8.4:22-10.200.16.10:49408.service - OpenSSH per-connection server daemon (10.200.16.10:49408). Feb 13 20:47:57.399565 sshd[6146]: Accepted publickey for core from 10.200.16.10 port 49408 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:47:57.401392 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:57.406153 systemd-logind[1667]: New session 15 of user core. Feb 13 20:47:57.412108 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:47:57.899371 sshd[6146]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:57.902631 systemd[1]: sshd@12-10.200.8.4:22-10.200.16.10:49408.service: Deactivated successfully. Feb 13 20:47:57.904955 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:47:57.906715 systemd-logind[1667]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:47:57.908090 systemd-logind[1667]: Removed session 15. Feb 13 20:48:03.014250 systemd[1]: Started sshd@13-10.200.8.4:22-10.200.16.10:38754.service - OpenSSH per-connection server daemon (10.200.16.10:38754). Feb 13 20:48:03.634256 sshd[6164]: Accepted publickey for core from 10.200.16.10 port 38754 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:48:03.635755 sshd[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:03.640545 systemd-logind[1667]: New session 16 of user core. Feb 13 20:48:03.649114 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:48:04.133462 sshd[6164]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:04.136542 systemd[1]: sshd@13-10.200.8.4:22-10.200.16.10:38754.service: Deactivated successfully. Feb 13 20:48:04.138861 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:48:04.140776 systemd-logind[1667]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:48:04.141884 systemd-logind[1667]: Removed session 16. Feb 13 20:48:09.244943 systemd[1]: Started sshd@14-10.200.8.4:22-10.200.16.10:47852.service - OpenSSH per-connection server daemon (10.200.16.10:47852). 
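From here to the end of the section the journal settles into a steady heartbeat of short SSH sessions from 10.200.16.10: each connection gets its own per-connection systemd unit (sshd@N-<local>:22-<peer>:<port>.service), a pam_unix session, a logind session scope, and a clean teardown moments later. Because the open/close strings are stable, session durations can be recovered mechanically; the Go sketch below pairs them by sshd PID. The line format is taken from this journal, but the program itself is only an illustration, not a tool this host runs.

package main

import (
	"fmt"
	"regexp"
	"strings"
	"time"
)

// openRe/closeRe match the exact pam_unix strings in this journal, keyed by
// the sshd PID so an open pairs with the close from the same daemon process.
var (
	openRe  = regexp.MustCompile(`^(\S+ \S+ \S+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session opened for user`)
	closeRe = regexp.MustCompile(`^(\S+ \S+ \S+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed for user`)
)

// mustParse reads the journal's "Feb 13 20:47:36.178005" stamps (year-less,
// so this assumes all records fall within one year).
func mustParse(s string) time.Time {
	t, err := time.Parse("Jan 2 15:04:05.000000", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	journal := `Feb 13 20:47:36.178005 sshd[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:36.685151 sshd[6019]: pam_unix(sshd:session): session closed for user core`

	opened := map[string]time.Time{} // sshd PID -> open time
	for _, line := range strings.Split(journal, "\n") {
		if m := openRe.FindStringSubmatch(line); m != nil {
			opened[m[2]] = mustParse(m[1])
		} else if m := closeRe.FindStringSubmatch(line); m != nil {
			if t0, ok := opened[m[2]]; ok {
				fmt.Printf("sshd[%s]: session lasted %v\n", m[2], mustParse(m[1]).Sub(t0))
				delete(opened, m[2])
			}
		}
	}
}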
Feb 13 20:48:09.543450 update_engine[1669]: I20250213 20:48:09.543382 1669 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 20:48:09.543450 update_engine[1669]: I20250213 20:48:09.543442 1669 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 20:48:09.544084 update_engine[1669]: I20250213 20:48:09.543694 1669 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 20:48:09.544371 update_engine[1669]: I20250213 20:48:09.544332 1669 omaha_request_params.cc:62] Current group set to lts Feb 13 20:48:09.544532 update_engine[1669]: I20250213 20:48:09.544498 1669 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 20:48:09.544532 update_engine[1669]: I20250213 20:48:09.544518 1669 update_attempter.cc:643] Scheduling an action processor start. Feb 13 20:48:09.544637 update_engine[1669]: I20250213 20:48:09.544540 1669 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:48:09.544637 update_engine[1669]: I20250213 20:48:09.544582 1669 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 20:48:09.544723 update_engine[1669]: I20250213 20:48:09.544662 1669 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:48:09.544723 update_engine[1669]: I20250213 20:48:09.544674 1669 omaha_request_action.cc:272] Request: Feb 13 20:48:09.544723 update_engine[1669]: Feb 13 20:48:09.544723 update_engine[1669]: Feb 13 20:48:09.544723 update_engine[1669]: Feb 13 20:48:09.544723 update_engine[1669]: Feb 13 20:48:09.544723 update_engine[1669]: Feb 13 20:48:09.544723 update_engine[1669]: Feb 13 20:48:09.544723 update_engine[1669]: Feb 13 20:48:09.544723 update_engine[1669]: Feb 13 20:48:09.544723 update_engine[1669]: I20250213 20:48:09.544683 1669 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:48:09.545383 locksmithd[1700]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 20:48:09.546716 update_engine[1669]: I20250213 20:48:09.546680 1669 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:48:09.547082 update_engine[1669]: I20250213 20:48:09.547046 1669 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:48:09.570148 update_engine[1669]: E20250213 20:48:09.570082 1669 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:48:09.570297 update_engine[1669]: I20250213 20:48:09.570202 1669 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 20:48:09.868175 sshd[6179]: Accepted publickey for core from 10.200.16.10 port 47852 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:48:09.870423 sshd[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:09.875026 systemd-logind[1667]: New session 17 of user core. Feb 13 20:48:09.884147 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:48:10.368717 sshd[6179]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:10.371884 systemd[1]: sshd@14-10.200.8.4:22-10.200.16.10:47852.service: Deactivated successfully. Feb 13 20:48:10.374185 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:48:10.375847 systemd-logind[1667]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:48:10.377139 systemd-logind[1667]: Removed session 17. 
Feb 13 20:48:15.484271 systemd[1]: Started sshd@15-10.200.8.4:22-10.200.16.10:47868.service - OpenSSH per-connection server daemon (10.200.16.10:47868). Feb 13 20:48:16.103113 sshd[6216]: Accepted publickey for core from 10.200.16.10 port 47868 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:48:16.104648 sshd[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:16.109071 systemd-logind[1667]: New session 18 of user core. Feb 13 20:48:16.116120 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:48:16.602444 sshd[6216]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:16.605932 systemd[1]: sshd@15-10.200.8.4:22-10.200.16.10:47868.service: Deactivated successfully. Feb 13 20:48:16.608620 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:48:16.610812 systemd-logind[1667]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:48:16.611769 systemd-logind[1667]: Removed session 18. Feb 13 20:48:19.540251 update_engine[1669]: I20250213 20:48:19.540164 1669 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:48:19.540782 update_engine[1669]: I20250213 20:48:19.540499 1669 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:48:19.540845 update_engine[1669]: I20250213 20:48:19.540797 1669 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:48:19.561839 update_engine[1669]: E20250213 20:48:19.561760 1669 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:48:19.561988 update_engine[1669]: I20250213 20:48:19.561865 1669 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 20:48:21.721244 systemd[1]: Started sshd@16-10.200.8.4:22-10.200.16.10:56070.service - OpenSSH per-connection server daemon (10.200.16.10:56070). Feb 13 20:48:22.340712 sshd[6229]: Accepted publickey for core from 10.200.16.10 port 56070 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:48:22.342539 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:22.347057 systemd-logind[1667]: New session 19 of user core. Feb 13 20:48:22.351111 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:48:22.842383 sshd[6229]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:22.846165 systemd[1]: sshd@16-10.200.8.4:22-10.200.16.10:56070.service: Deactivated successfully. Feb 13 20:48:22.849050 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:48:22.851160 systemd-logind[1667]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:48:22.852542 systemd-logind[1667]: Removed session 19. Feb 13 20:48:22.953053 systemd[1]: Started sshd@17-10.200.8.4:22-10.200.16.10:56086.service - OpenSSH per-connection server daemon (10.200.16.10:56086). Feb 13 20:48:23.574625 sshd[6241]: Accepted publickey for core from 10.200.16.10 port 56086 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:48:23.576710 sshd[6241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:23.581426 systemd-logind[1667]: New session 20 of user core. Feb 13 20:48:23.586138 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:48:24.130198 sshd[6241]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:24.134058 systemd[1]: sshd@17-10.200.8.4:22-10.200.16.10:56086.service: Deactivated successfully. 
Feb 13 20:48:24.136458 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 20:48:24.137976 systemd-logind[1667]: Session 20 logged out. Waiting for processes to exit.
Feb 13 20:48:24.139059 systemd-logind[1667]: Removed session 20.
Feb 13 20:48:24.241004 systemd[1]: Started sshd@18-10.200.8.4:22-10.200.16.10:56094.service - OpenSSH per-connection server daemon (10.200.16.10:56094).
Feb 13 20:48:24.862030 sshd[6252]: Accepted publickey for core from 10.200.16.10 port 56094 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:48:24.863680 sshd[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:24.868926 systemd-logind[1667]: New session 21 of user core.
Feb 13 20:48:24.877391 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 20:48:26.212320 sshd[6252]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:26.216630 systemd[1]: sshd@18-10.200.8.4:22-10.200.16.10:56094.service: Deactivated successfully.
Feb 13 20:48:26.218861 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 20:48:26.219800 systemd-logind[1667]: Session 21 logged out. Waiting for processes to exit.
Feb 13 20:48:26.220856 systemd-logind[1667]: Removed session 21.
Feb 13 20:48:26.331273 systemd[1]: Started sshd@19-10.200.8.4:22-10.200.16.10:56096.service - OpenSSH per-connection server daemon (10.200.16.10:56096).
Feb 13 20:48:26.952971 sshd[6295]: Accepted publickey for core from 10.200.16.10 port 56096 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:48:26.954498 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:26.958492 systemd-logind[1667]: New session 22 of user core.
Feb 13 20:48:26.964140 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 20:48:27.559728 sshd[6295]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:27.562698 systemd[1]: sshd@19-10.200.8.4:22-10.200.16.10:56096.service: Deactivated successfully.
Feb 13 20:48:27.564889 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 20:48:27.566373 systemd-logind[1667]: Session 22 logged out. Waiting for processes to exit.
Feb 13 20:48:27.567648 systemd-logind[1667]: Removed session 22.
Feb 13 20:48:27.670209 systemd[1]: Started sshd@20-10.200.8.4:22-10.200.16.10:56102.service - OpenSSH per-connection server daemon (10.200.16.10:56102).
Feb 13 20:48:28.291974 sshd[6306]: Accepted publickey for core from 10.200.16.10 port 56102 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:48:28.293482 sshd[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:28.297449 systemd-logind[1667]: New session 23 of user core.
Feb 13 20:48:28.301133 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 20:48:28.794309 sshd[6306]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:28.798827 systemd[1]: sshd@20-10.200.8.4:22-10.200.16.10:56102.service: Deactivated successfully.
Feb 13 20:48:28.801341 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 20:48:28.802405 systemd-logind[1667]: Session 23 logged out. Waiting for processes to exit.
Feb 13 20:48:28.803608 systemd-logind[1667]: Removed session 23.
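Every SSH connection above follows the same lifecycle: sshd accepts the public key, pam_unix opens the session, systemd-logind registers "New session N", systemd runs it in session-N.scope, and teardown happens in reverse. A small illustrative scraper (not part of Flatcar or systemd) that pairs those open/close lines from saved journal text on stdin and flags sessions that were opened but never removed:

    #include <iostream>
    #include <map>
    #include <regex>
    #include <string>

    int main() {
        const std::regex open_re(R"(New session (\d+) of user (\w+))");
        const std::regex close_re(R"(Removed session (\d+)\.)");
        std::map<int, std::string> open_sessions;  // session id -> user
        std::string line;
        std::smatch m;
        while (std::getline(std::cin, line)) {
            if (std::regex_search(line, m, open_re))
                open_sessions[std::stoi(m[1].str())] = m[2].str();
            else if (std::regex_search(line, m, close_re))
                open_sessions.erase(std::stoi(m[1].str()));
        }
        for (const auto& [id, user] : open_sessions)
            std::cout << "session " << id << " (" << user
                      << ") never removed\n";
    }

Run against this log, it would print nothing: sessions 17 through 29 all open and close cleanly.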
Feb 13 20:48:29.538824 update_engine[1669]: I20250213 20:48:29.538735 1669 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:48:29.539384 update_engine[1669]: I20250213 20:48:29.539114 1669 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:48:29.539478 update_engine[1669]: I20250213 20:48:29.539436 1669 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:48:29.703862 update_engine[1669]: E20250213 20:48:29.703751 1669 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:48:29.704183 update_engine[1669]: I20250213 20:48:29.703948 1669 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 20:48:33.913248 systemd[1]: Started sshd@21-10.200.8.4:22-10.200.16.10:40594.service - OpenSSH per-connection server daemon (10.200.16.10:40594).
Feb 13 20:48:34.533654 sshd[6321]: Accepted publickey for core from 10.200.16.10 port 40594 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:48:34.535195 sshd[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:34.539947 systemd-logind[1667]: New session 24 of user core.
Feb 13 20:48:34.545127 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 20:48:35.062174 sshd[6321]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:35.065946 systemd[1]: sshd@21-10.200.8.4:22-10.200.16.10:40594.service: Deactivated successfully.
Feb 13 20:48:35.068142 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 20:48:35.068899 systemd-logind[1667]: Session 24 logged out. Waiting for processes to exit.
Feb 13 20:48:35.069899 systemd-logind[1667]: Removed session 24.
Feb 13 20:48:39.538245 update_engine[1669]: I20250213 20:48:39.538159 1669 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:48:39.538768 update_engine[1669]: I20250213 20:48:39.538514 1669 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:48:39.538869 update_engine[1669]: I20250213 20:48:39.538829 1669 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:48:39.562141 update_engine[1669]: E20250213 20:48:39.562069 1669 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:48:39.562335 update_engine[1669]: I20250213 20:48:39.562165 1669 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 20:48:39.562335 update_engine[1669]: I20250213 20:48:39.562177 1669 omaha_request_action.cc:617] Omaha request response:
Feb 13 20:48:39.562335 update_engine[1669]: E20250213 20:48:39.562275 1669 omaha_request_action.cc:636] Omaha request network transfer failed.
Feb 13 20:48:39.562335 update_engine[1669]: I20250213 20:48:39.562301 1669 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 13 20:48:39.562335 update_engine[1669]: I20250213 20:48:39.562309 1669 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:48:39.562335 update_engine[1669]: I20250213 20:48:39.562328 1669 update_attempter.cc:306] Processing Done.
Feb 13 20:48:39.562616 update_engine[1669]: E20250213 20:48:39.562348 1669 update_attempter.cc:619] Update failed.
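Once the last retry fails, the ActionProcessor aborts the rest of the pipeline: OmahaRequestAction reports failure and nothing downstream runs. A toy model of that control flow; the action names are taken from the log, but the implementation is invented for illustration:

    #include <cstdio>
    #include <functional>
    #include <string>
    #include <vector>

    struct Action {
        std::string name;
        std::function<bool()> run;
    };

    int main() {
        // The first action stands in for the network request that failed above.
        std::vector<Action> pipeline = {
            {"OmahaRequestAction", [] { return false; }},
            {"OmahaResponseHandlerAction", [] { return true; }},
        };
        for (const Action& a : pipeline) {
            std::printf("ActionProcessor::StartProcessing: %s\n", a.name.c_str());
            if (!a.run()) {
                std::printf("ActionProcessor::ActionComplete: %s action failed. "
                            "Aborting processing.\n", a.name.c_str());
                return 1;
            }
        }
        std::puts("Processing Done.");
    }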
Feb 13 20:48:39.562616 update_engine[1669]: I20250213 20:48:39.562358 1669 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 20:48:39.562616 update_engine[1669]: I20250213 20:48:39.562367 1669 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 20:48:39.562616 update_engine[1669]: I20250213 20:48:39.562377 1669 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 20:48:39.562616 update_engine[1669]: I20250213 20:48:39.562484 1669 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 20:48:39.562616 update_engine[1669]: I20250213 20:48:39.562517 1669 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 20:48:39.562616 update_engine[1669]: I20250213 20:48:39.562527 1669 omaha_request_action.cc:272] Request:
Feb 13 20:48:39.562616 update_engine[1669]:
Feb 13 20:48:39.562616 update_engine[1669]:
Feb 13 20:48:39.562616 update_engine[1669]:
Feb 13 20:48:39.562616 update_engine[1669]:
Feb 13 20:48:39.562616 update_engine[1669]:
Feb 13 20:48:39.562616 update_engine[1669]:
Feb 13 20:48:39.562616 update_engine[1669]: I20250213 20:48:39.562537 1669 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:48:39.563302 update_engine[1669]: I20250213 20:48:39.562764 1669 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:48:39.563302 update_engine[1669]: I20250213 20:48:39.563082 1669 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:48:39.563487 locksmithd[1700]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 20:48:39.587742 update_engine[1669]: E20250213 20:48:39.587670 1669 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:48:39.587898 update_engine[1669]: I20250213 20:48:39.587766 1669 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 20:48:39.587898 update_engine[1669]: I20250213 20:48:39.587779 1669 omaha_request_action.cc:617] Omaha request response:
Feb 13 20:48:39.587898 update_engine[1669]: I20250213 20:48:39.587791 1669 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:48:39.587898 update_engine[1669]: I20250213 20:48:39.587800 1669 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:48:39.587898 update_engine[1669]: I20250213 20:48:39.587807 1669 update_attempter.cc:306] Processing Done.
Feb 13 20:48:39.587898 update_engine[1669]: I20250213 20:48:39.587818 1669 update_attempter.cc:310] Error event sent.
Feb 13 20:48:39.587898 update_engine[1669]: I20250213 20:48:39.587832 1669 update_check_scheduler.cc:74] Next update check in 40m30s
Feb 13 20:48:39.588326 locksmithd[1700]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 13 20:48:40.177725 systemd[1]: Started sshd@22-10.200.8.4:22-10.200.16.10:45362.service - OpenSSH per-connection server daemon (10.200.16.10:45362).
Feb 13 20:48:40.796412 sshd[6336]: Accepted publickey for core from 10.200.16.10 port 45362 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:48:40.798229 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:40.802454 systemd-logind[1667]: New session 25 of user core.
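The first two lines above show the error funnel: the network-level code 2000 is collapsed into the generic kActionCodeOmahaErrorInHTTPResponse (37) before the payload state is updated. A reconstructed-for-illustration mapping consistent with those two log lines; the real tables live in update_engine's utils.cc and payload_state.cc:

    #include <cstdio>

    // Constants taken from the log; the enum layout is an assumption.
    enum ActionCode {
        kActionCodeOmahaErrorInHTTPResponse = 37,
        kActionCodeOmahaRequestHTTPResponseBase = 2000,
    };

    int ConvertCode(int code) {
        // Codes at or above the HTTP-response base are collapsed to the
        // single generic code 37, as the "Converting error code 2000 ..."
        // line above shows.
        if (code >= kActionCodeOmahaRequestHTTPResponseBase)
            return kActionCodeOmahaErrorInHTTPResponse;
        return code;
    }

    int main() {
        std::printf("Converting error code 2000 to %d\n", ConvertCode(2000));
    }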
Feb 13 20:48:40.804189 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 20:48:41.298476 sshd[6336]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:41.301771 systemd[1]: sshd@22-10.200.8.4:22-10.200.16.10:45362.service: Deactivated successfully.
Feb 13 20:48:41.303955 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 20:48:41.305838 systemd-logind[1667]: Session 25 logged out. Waiting for processes to exit.
Feb 13 20:48:41.306884 systemd-logind[1667]: Removed session 25.
Feb 13 20:48:46.410180 systemd[1]: Started sshd@23-10.200.8.4:22-10.200.16.10:45368.service - OpenSSH per-connection server daemon (10.200.16.10:45368).
Feb 13 20:48:47.034611 sshd[6375]: Accepted publickey for core from 10.200.16.10 port 45368 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:48:47.036436 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:47.042108 systemd-logind[1667]: New session 26 of user core.
Feb 13 20:48:47.050110 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 20:48:47.531337 sshd[6375]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:47.534912 systemd[1]: sshd@23-10.200.8.4:22-10.200.16.10:45368.service: Deactivated successfully.
Feb 13 20:48:47.537307 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 20:48:47.538784 systemd-logind[1667]: Session 26 logged out. Waiting for processes to exit.
Feb 13 20:48:47.539924 systemd-logind[1667]: Removed session 26.
Feb 13 20:48:52.647253 systemd[1]: Started sshd@24-10.200.8.4:22-10.200.16.10:57608.service - OpenSSH per-connection server daemon (10.200.16.10:57608).
Feb 13 20:48:53.268914 sshd[6406]: Accepted publickey for core from 10.200.16.10 port 57608 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:48:53.270491 sshd[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:53.276632 systemd-logind[1667]: New session 27 of user core.
Feb 13 20:48:53.283136 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 20:48:53.764000 sshd[6406]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:53.767869 systemd[1]: sshd@24-10.200.8.4:22-10.200.16.10:57608.service: Deactivated successfully.
Feb 13 20:48:53.770665 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 20:48:53.772687 systemd-logind[1667]: Session 27 logged out. Waiting for processes to exit.
Feb 13 20:48:53.773800 systemd-logind[1667]: Removed session 27.
Feb 13 20:48:58.879252 systemd[1]: Started sshd@25-10.200.8.4:22-10.200.16.10:57618.service - OpenSSH per-connection server daemon (10.200.16.10:57618).
Feb 13 20:48:59.498562 sshd[6450]: Accepted publickey for core from 10.200.16.10 port 57618 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:48:59.500146 sshd[6450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:59.505021 systemd-logind[1667]: New session 28 of user core.
Feb 13 20:48:59.512120 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 20:48:59.995827 sshd[6450]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:59.998832 systemd[1]: sshd@25-10.200.8.4:22-10.200.16.10:57618.service: Deactivated successfully.
Feb 13 20:49:00.001180 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 20:49:00.003515 systemd-logind[1667]: Session 28 logged out. Waiting for processes to exit.
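The "Next update check in 40m30s" line further up suggests a periodic check with a randomized offset, so that a fleet of machines does not poll the update server in lock-step. A hypothetical sketch of such a scheduler; the 45-minute base and the +/-5-minute fuzz are assumptions for illustration, not values taken from the log:

    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng{std::random_device{}()};
        const int base_s = 45 * 60;                          // assumed base period
        std::uniform_int_distribution<int> fuzz{-300, 300};  // assumed +/-5 min fuzz
        const int next_s = base_s + fuzz(rng);
        // With these assumptions the interval lands between 40m and 50m,
        // which is consistent with the 40m30s observed above.
        std::printf("Next update check in %dm%ds\n", next_s / 60, next_s % 60);
    }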
Feb 13 20:49:00.004659 systemd-logind[1667]: Removed session 28.
Feb 13 20:49:05.105238 systemd[1]: Started sshd@26-10.200.8.4:22-10.200.16.10:53868.service - OpenSSH per-connection server daemon (10.200.16.10:53868).
Feb 13 20:49:05.733330 sshd[6475]: Accepted publickey for core from 10.200.16.10 port 53868 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:49:05.734824 sshd[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:49:05.739408 systemd-logind[1667]: New session 29 of user core.
Feb 13 20:49:05.747116 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 20:49:06.231494 sshd[6475]: pam_unix(sshd:session): session closed for user core
Feb 13 20:49:06.234857 systemd[1]: sshd@26-10.200.8.4:22-10.200.16.10:53868.service: Deactivated successfully.
Feb 13 20:49:06.237475 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 20:49:06.239226 systemd-logind[1667]: Session 29 logged out. Waiting for processes to exit.
Feb 13 20:49:06.240362 systemd-logind[1667]: Removed session 29.