Apr 30 03:28:07.046641 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:28:07.046667 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:28:07.046677 kernel: BIOS-provided physical RAM map:
Apr 30 03:28:07.046685 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 03:28:07.046691 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Apr 30 03:28:07.046700 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Apr 30 03:28:07.046708 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Apr 30 03:28:07.046717 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Apr 30 03:28:07.046726 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Apr 30 03:28:07.046732 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Apr 30 03:28:07.046740 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Apr 30 03:28:07.046747 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Apr 30 03:28:07.046753 kernel: printk: bootconsole [earlyser0] enabled
Apr 30 03:28:07.046760 kernel: NX (Execute Disable) protection: active
Apr 30 03:28:07.046772 kernel: APIC: Static calls initialized
Apr 30 03:28:07.046779 kernel: efi: EFI v2.7 by Microsoft
Apr 30 03:28:07.046790 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Apr 30 03:28:07.046797 kernel: SMBIOS 3.1.0 present.
Apr 30 03:28:07.046804 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Apr 30 03:28:07.046817 kernel: Hypervisor detected: Microsoft Hyper-V
Apr 30 03:28:07.046825 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Apr 30 03:28:07.046834 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Apr 30 03:28:07.046840 kernel: Hyper-V: Nested features: 0x1e0101
Apr 30 03:28:07.046864 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Apr 30 03:28:07.046876 kernel: Hyper-V: Using hypercall for remote TLB flush
Apr 30 03:28:07.046883 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 30 03:28:07.046892 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 30 03:28:07.046901 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Apr 30 03:28:07.046908 kernel: tsc: Detected 2593.904 MHz processor
Apr 30 03:28:07.046918 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:28:07.046926 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:28:07.046935 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Apr 30 03:28:07.046943 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 30 03:28:07.046955 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:28:07.046962 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Apr 30 03:28:07.046970 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Apr 30 03:28:07.046979 kernel: Using GB pages for direct mapping
Apr 30 03:28:07.046986 kernel: Secure boot disabled
Apr 30 03:28:07.046994 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:28:07.047003 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Apr 30 03:28:07.047014 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:07.047025 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:07.047032 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Apr 30 03:28:07.047043 kernel: ACPI: FACS 0x000000003FFFE000 000040
Apr 30 03:28:07.047051 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:07.047059 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:07.047069 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:07.047078 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:07.047088 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:07.047096 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:07.047105 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:07.047114 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Apr 30 03:28:07.047121 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Apr 30 03:28:07.047132 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Apr 30 03:28:07.047139 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Apr 30 03:28:07.047150 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Apr 30 03:28:07.047159 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Apr 30 03:28:07.047166 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Apr 30 03:28:07.047176 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Apr 30 03:28:07.047183 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Apr 30 03:28:07.047192 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Apr 30 03:28:07.047201 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 03:28:07.047209 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 03:28:07.047219 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Apr 30 03:28:07.047228 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Apr 30 03:28:07.047237 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Apr 30 03:28:07.047246 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Apr 30 03:28:07.047254 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Apr 30 03:28:07.047264 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Apr 30 03:28:07.047273 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Apr 30 03:28:07.047282 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Apr 30 03:28:07.047293 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Apr 30 03:28:07.047302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Apr 30 03:28:07.047317 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Apr 30 03:28:07.047333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Apr 30 03:28:07.047350 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Apr 30 03:28:07.047365 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Apr 30 03:28:07.047380 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Apr 30 03:28:07.047396 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Apr 30 03:28:07.047412 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Apr 30 03:28:07.047428 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Apr 30 03:28:07.047445 kernel: Zone ranges:
Apr 30 03:28:07.047465 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:28:07.047479 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 30 03:28:07.047493 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Apr 30 03:28:07.047507 kernel: Movable zone start for each node
Apr 30 03:28:07.047522 kernel: Early memory node ranges
Apr 30 03:28:07.047538 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 30 03:28:07.047558 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Apr 30 03:28:07.047576 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Apr 30 03:28:07.047593 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Apr 30 03:28:07.047619 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Apr 30 03:28:07.047635 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:28:07.047649 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 30 03:28:07.047667 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Apr 30 03:28:07.047684 kernel: ACPI: PM-Timer IO Port: 0x408
Apr 30 03:28:07.047698 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Apr 30 03:28:07.047713 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:28:07.047726 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:28:07.047741 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:28:07.047761 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Apr 30 03:28:07.047775 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:28:07.047791 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Apr 30 03:28:07.047806 kernel: Booting paravirtualized kernel on Hyper-V
Apr 30 03:28:07.047823 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:28:07.047837 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:28:07.047870 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:28:07.047885 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:28:07.047902 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:28:07.047925 kernel: Hyper-V: PV spinlocks enabled
Apr 30 03:28:07.047940 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 03:28:07.047958 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:28:07.047975 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:28:07.047991 kernel: random: crng init done
Apr 30 03:28:07.048006 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 30 03:28:07.048023 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 03:28:07.048035 kernel: Fallback order for Node 0: 0
Apr 30 03:28:07.048049 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Apr 30 03:28:07.048071 kernel: Policy zone: Normal
Apr 30 03:28:07.048085 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:28:07.048102 kernel: software IO TLB: area num 2.
Apr 30 03:28:07.048116 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 310124K reserved, 0K cma-reserved)
Apr 30 03:28:07.048130 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:28:07.048144 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:28:07.048157 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:28:07.048171 kernel: Dynamic Preempt: voluntary
Apr 30 03:28:07.048185 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:28:07.048202 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:28:07.048221 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:28:07.048236 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:28:07.048252 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:28:07.048268 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:28:07.048283 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:28:07.048302 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:28:07.048317 kernel: Using NULL legacy PIC
Apr 30 03:28:07.048332 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Apr 30 03:28:07.048347 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:28:07.048362 kernel: Console: colour dummy device 80x25
Apr 30 03:28:07.048377 kernel: printk: console [tty1] enabled
Apr 30 03:28:07.048393 kernel: printk: console [ttyS0] enabled
Apr 30 03:28:07.048409 kernel: printk: bootconsole [earlyser0] disabled
Apr 30 03:28:07.048424 kernel: ACPI: Core revision 20230628
Apr 30 03:28:07.048439 kernel: Failed to register legacy timer interrupt
Apr 30 03:28:07.048457 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:28:07.048471 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 30 03:28:07.048485 kernel: Hyper-V: Using IPI hypercalls
Apr 30 03:28:07.048500 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Apr 30 03:28:07.048513 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Apr 30 03:28:07.048528 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Apr 30 03:28:07.048542 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Apr 30 03:28:07.048556 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Apr 30 03:28:07.048569 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Apr 30 03:28:07.048586 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904)
Apr 30 03:28:07.048600 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 30 03:28:07.048613 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 30 03:28:07.048628 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:28:07.048642 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:28:07.048656 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:28:07.048669 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:28:07.048683 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 30 03:28:07.048697 kernel: RETBleed: Vulnerable
Apr 30 03:28:07.048714 kernel: Speculative Store Bypass: Vulnerable
Apr 30 03:28:07.048729 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:28:07.048744 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:28:07.048760 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:28:07.048774 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:28:07.048788 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:28:07.048802 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 30 03:28:07.048816 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 30 03:28:07.048830 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 30 03:28:07.050876 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:28:07.050892 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 30 03:28:07.050905 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 30 03:28:07.050916 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 30 03:28:07.050924 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 30 03:28:07.050934 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:28:07.050943 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:28:07.050953 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:28:07.050962 kernel: landlock: Up and running.
Apr 30 03:28:07.050970 kernel: SELinux: Initializing.
Apr 30 03:28:07.050978 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:28:07.050989 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:28:07.050998 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 30 03:28:07.051007 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:28:07.051019 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:28:07.051030 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:28:07.051039 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 30 03:28:07.051047 kernel: signal: max sigframe size: 3632
Apr 30 03:28:07.051058 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:28:07.051067 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:28:07.051078 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 03:28:07.051086 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:28:07.051096 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:28:07.051106 kernel: .... node #0, CPUs: #1
Apr 30 03:28:07.051115 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Apr 30 03:28:07.051127 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 30 03:28:07.051138 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:28:07.051148 kernel: smpboot: Max logical packages: 1
Apr 30 03:28:07.051158 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Apr 30 03:28:07.051168 kernel: devtmpfs: initialized
Apr 30 03:28:07.051178 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:28:07.051191 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Apr 30 03:28:07.051199 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:28:07.051209 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:28:07.051218 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:28:07.051227 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:28:07.051237 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:28:07.051247 kernel: audit: type=2000 audit(1745983685.027:1): state=initialized audit_enabled=0 res=1
Apr 30 03:28:07.051256 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:28:07.051265 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:28:07.051277 kernel: cpuidle: using governor menu
Apr 30 03:28:07.051286 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:28:07.051296 kernel: dca service started, version 1.12.1
Apr 30 03:28:07.051305 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Apr 30 03:28:07.051316 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:28:07.051324 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:28:07.051335 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:28:07.051345 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:28:07.051354 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:28:07.051368 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:28:07.051376 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:28:07.051387 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:28:07.051396 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:28:07.051406 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:28:07.051415 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:28:07.051425 kernel: ACPI: Interpreter enabled
Apr 30 03:28:07.051435 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:28:07.051444 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:28:07.051458 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:28:07.051466 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 30 03:28:07.051477 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 30 03:28:07.051488 kernel: iommu: Default domain type: Translated
Apr 30 03:28:07.051498 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:28:07.051508 kernel: efivars: Registered efivars operations
Apr 30 03:28:07.051519 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:28:07.051529 kernel: PCI: System does not support PCI
Apr 30 03:28:07.051539 kernel: vgaarb: loaded
Apr 30 03:28:07.051549 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 30 03:28:07.051561 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:28:07.051569 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:28:07.051579 kernel: pnp: PnP ACPI init
Apr 30 03:28:07.051588 kernel: pnp: PnP ACPI: found 3 devices
Apr 30 03:28:07.051598 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:28:07.051607 kernel: NET: Registered PF_INET protocol family
Apr 30 03:28:07.051617 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:28:07.051626 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 30 03:28:07.051637 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:28:07.051648 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 03:28:07.051657 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 30 03:28:07.051667 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 30 03:28:07.051676 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 30 03:28:07.051686 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 30 03:28:07.051694 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:28:07.051705 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:28:07.051713 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:28:07.051726 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 30 03:28:07.051735 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Apr 30 03:28:07.051746 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 03:28:07.051753 kernel: Initialise system trusted keyrings
Apr 30 03:28:07.051764 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 30 03:28:07.051772 kernel: Key type asymmetric registered
Apr 30 03:28:07.051783 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:28:07.051791 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:28:07.051803 kernel: io scheduler mq-deadline registered
Apr 30 03:28:07.051813 kernel: io scheduler kyber registered
Apr 30 03:28:07.051821 kernel: io scheduler bfq registered
Apr 30 03:28:07.051832 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:28:07.051840 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:28:07.051872 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:28:07.051883 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 30 03:28:07.051893 kernel: i8042: PNP: No PS/2 controller found.
Apr 30 03:28:07.052033 kernel: rtc_cmos 00:02: registered as rtc0
Apr 30 03:28:07.052140 kernel: rtc_cmos 00:02: setting system clock to 2025-04-30T03:28:06 UTC (1745983686)
Apr 30 03:28:07.052233 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 30 03:28:07.052247 kernel: intel_pstate: CPU model not supported
Apr 30 03:28:07.052258 kernel: efifb: probing for efifb
Apr 30 03:28:07.052269 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 30 03:28:07.052280 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 30 03:28:07.052293 kernel: efifb: scrolling: redraw
Apr 30 03:28:07.052307 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 30 03:28:07.052328 kernel: Console: switching to colour frame buffer device 128x48
Apr 30 03:28:07.052342 kernel: fb0: EFI VGA frame buffer device
Apr 30 03:28:07.052356 kernel: pstore: Using crash dump compression: deflate
Apr 30 03:28:07.052371 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 30 03:28:07.052385 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:28:07.052399 kernel: Segment Routing with IPv6
Apr 30 03:28:07.052413 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:28:07.052428 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:28:07.052442 kernel: Key type dns_resolver registered
Apr 30 03:28:07.052459 kernel: IPI shorthand broadcast: enabled
Apr 30 03:28:07.052474 kernel: sched_clock: Marking stable (770002900, 43159500)->(1014063100, -200900700)
Apr 30 03:28:07.052488 kernel: registered taskstats version 1
Apr 30 03:28:07.052503 kernel: Loading compiled-in X.509 certificates
Apr 30 03:28:07.052517 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:28:07.052531 kernel: Key type .fscrypt registered
Apr 30 03:28:07.052545 kernel: Key type fscrypt-provisioning registered
Apr 30 03:28:07.052560 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:28:07.052575 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:28:07.052592 kernel: ima: No architecture policies found
Apr 30 03:28:07.052607 kernel: clk: Disabling unused clocks
Apr 30 03:28:07.052622 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:28:07.052637 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:28:07.052652 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:28:07.052666 kernel: Run /init as init process
Apr 30 03:28:07.052681 kernel: with arguments:
Apr 30 03:28:07.052696 kernel: /init
Apr 30 03:28:07.052710 kernel: with environment:
Apr 30 03:28:07.052727 kernel: HOME=/
Apr 30 03:28:07.052741 kernel: TERM=linux
Apr 30 03:28:07.052756 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:28:07.052774 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:28:07.052792 systemd[1]: Detected virtualization microsoft.
Apr 30 03:28:07.052808 systemd[1]: Detected architecture x86-64.
Apr 30 03:28:07.052822 systemd[1]: Running in initrd.
Apr 30 03:28:07.052836 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:28:07.054889 systemd[1]: Hostname set to <localhost>.
Apr 30 03:28:07.054904 systemd[1]: Initializing machine ID from random generator.
Apr 30 03:28:07.054913 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:28:07.054927 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:28:07.054935 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:28:07.054948 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:28:07.054957 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:28:07.054969 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:28:07.054979 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:28:07.054992 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:28:07.055001 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:28:07.055013 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:28:07.055021 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:28:07.055032 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:28:07.055042 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:28:07.055055 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:28:07.055065 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:28:07.055075 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:28:07.055085 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:28:07.055094 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:28:07.055105 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:28:07.055114 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:28:07.055125 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:28:07.055139 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:28:07.055148 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:28:07.055160 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:28:07.055168 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:28:07.055178 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:28:07.055188 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:28:07.055198 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:28:07.055209 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:28:07.055238 systemd-journald[176]: Collecting audit messages is disabled.
Apr 30 03:28:07.055263 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:07.055275 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:28:07.055284 systemd-journald[176]: Journal started
Apr 30 03:28:07.055308 systemd-journald[176]: Runtime Journal (/run/log/journal/da654536e59242c9bbcd9d35c3d32362) is 8.0M, max 158.8M, 150.8M free.
Apr 30 03:28:07.064980 systemd-modules-load[177]: Inserted module 'overlay'
Apr 30 03:28:07.072330 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:28:07.077852 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:28:07.085636 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:28:07.103266 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:28:07.103444 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:28:07.112882 kernel: Bridge firewalling registered
Apr 30 03:28:07.113047 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:28:07.119125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:07.124905 systemd-modules-load[177]: Inserted module 'br_netfilter'
Apr 30 03:28:07.128461 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:28:07.134824 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:28:07.155014 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:28:07.159973 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:28:07.165812 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:28:07.166917 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:28:07.183313 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:28:07.193205 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:28:07.201090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:28:07.203686 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:07.212976 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:28:07.233804 dracut-cmdline[215]: dracut-dracut-053
Apr 30 03:28:07.238178 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:28:07.264535 systemd-resolved[211]: Positive Trust Anchors:
Apr 30 03:28:07.264552 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:28:07.264606 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:28:07.288613 systemd-resolved[211]: Defaulting to hostname 'linux'.
Apr 30 03:28:07.291995 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:28:07.297463 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:28:07.320864 kernel: SCSI subsystem initialized
Apr 30 03:28:07.329858 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:28:07.340871 kernel: iscsi: registered transport (tcp)
Apr 30 03:28:07.362102 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:28:07.362165 kernel: QLogic iSCSI HBA Driver
Apr 30 03:28:07.396817 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:28:07.406966 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:28:07.432719 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:28:07.432780 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:28:07.435851 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:28:07.474864 kernel: raid6: avx512x4 gen() 18335 MB/s
Apr 30 03:28:07.493860 kernel: raid6: avx512x2 gen() 18254 MB/s
Apr 30 03:28:07.512852 kernel: raid6: avx512x1 gen() 18323 MB/s
Apr 30 03:28:07.531856 kernel: raid6: avx2x4 gen() 18287 MB/s
Apr 30 03:28:07.550857 kernel: raid6: avx2x2 gen() 18262 MB/s
Apr 30 03:28:07.571463 kernel: raid6: avx2x1 gen() 13971 MB/s
Apr 30 03:28:07.571512 kernel: raid6: using algorithm avx512x4 gen() 18335 MB/s
Apr 30 03:28:07.592715 kernel: raid6: .... xor() 8019 MB/s, rmw enabled
Apr 30 03:28:07.592744 kernel: raid6: using avx512x2 recovery algorithm
Apr 30 03:28:07.614858 kernel: xor: automatically using best checksumming function avx
Apr 30 03:28:07.760867 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:28:07.770076 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:28:07.778068 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:28:07.790524 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Apr 30 03:28:07.794924 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:28:07.811971 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:28:07.825294 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Apr 30 03:28:07.850100 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:28:07.860116 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:28:07.898162 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:28:07.909014 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 03:28:07.927296 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:28:07.936601 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:28:07.942908 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:28:07.948454 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:28:07.958011 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:28:07.973871 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:28:07.985908 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:28:08.009778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:28:08.015052 kernel: hv_vmbus: Vmbus version:5.2
Apr 30 03:28:08.011037 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:08.021428 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:28:08.041377 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 30 03:28:08.041406 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Apr 30 03:28:08.030540 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:28:08.030603 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:08.033735 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:08.051037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:08.075354 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 30 03:28:08.075399 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 30 03:28:08.075422 kernel: PTP clock support registered
Apr 30 03:28:08.079152 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:28:08.080472 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:08.089472 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:28:08.094875 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:28:08.098867 kernel: hv_vmbus: registering driver hv_storvsc
Apr 30 03:28:08.102076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:08.120567 kernel: scsi host1: storvsc_host_t
Apr 30 03:28:08.120621 kernel: scsi host0: storvsc_host_t
Apr 30 03:28:08.120642 kernel: hv_vmbus: registering driver hv_netvsc
Apr 30 03:28:08.125879 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 30 03:28:08.125922 kernel: hv_utils: Registering HyperV Utility Driver
Apr 30 03:28:08.129595 kernel: hv_vmbus: registering driver hv_utils
Apr 30 03:28:08.134931 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Apr 30 03:28:08.139022 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 03:28:08.145483 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:08.149686 kernel: hv_utils: Heartbeat IC version 3.0
Apr 30 03:28:08.149706 kernel: hv_utils: Shutdown IC version 3.2
Apr 30 03:28:08.232402 kernel: hv_utils: TimeSync IC version 4.0
Apr 30 03:28:08.233415 systemd-resolved[211]: Clock change detected. Flushing caches.
Apr 30 03:28:08.237502 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:28:08.251409 kernel: hv_vmbus: registering driver hid_hyperv
Apr 30 03:28:08.257411 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 30 03:28:08.263192 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 30 03:28:08.284840 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 30 03:28:08.287477 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 03:28:08.287495 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 30 03:28:08.285233 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:08.302934 kernel: hv_netvsc 7c1e5235-47cc-7c1e-5235-47cc7c1e5235 eth0: VF slot 1 added
Apr 30 03:28:08.319109 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 30 03:28:08.341507 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 30 03:28:08.341645 kernel: hv_vmbus: registering driver hv_pci
Apr 30 03:28:08.341661 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 30 03:28:08.341775 kernel: hv_pci d1a157bf-6b12-4921-bb6c-0c6f475bdc44: PCI VMBus probing: Using version 0x10004
Apr 30 03:28:08.377357 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 30 03:28:08.377571 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 30 03:28:08.377751 kernel: hv_pci d1a157bf-6b12-4921-bb6c-0c6f475bdc44: PCI host bridge to bus 6b12:00
Apr 30 03:28:08.377908 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:28:08.377932 kernel: pci_bus 6b12:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Apr 30 03:28:08.378142 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 30 03:28:08.378333 kernel: pci_bus 6b12:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 30 03:28:08.378502 kernel: pci 6b12:00:02.0: [15b3:1016] type 00 class 0x020000
Apr 30 03:28:08.378694 kernel: pci 6b12:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 30 03:28:08.378872 kernel: pci 6b12:00:02.0: enabling Extended Tags
Apr 30 03:28:08.379031 kernel: pci 6b12:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6b12:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Apr 30 03:28:08.379213 kernel: pci_bus 6b12:00: busn_res: [bus 00-ff] end is updated to 00
Apr 30 03:28:08.379359 kernel: pci 6b12:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 30 03:28:08.541947 kernel: mlx5_core 6b12:00:02.0: enabling device (0000 -> 0002)
Apr 30 03:28:08.768211 kernel: mlx5_core 6b12:00:02.0: firmware version: 14.30.5000
Apr 30 03:28:08.768802 kernel: hv_netvsc 7c1e5235-47cc-7c1e-5235-47cc7c1e5235 eth0: VF registering: eth1
Apr 30 03:28:08.769341 kernel: mlx5_core 6b12:00:02.0 eth1: joined to eth0
Apr 30 03:28:08.769563 kernel: mlx5_core 6b12:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Apr 30 03:28:08.775394 kernel: mlx5_core 6b12:00:02.0 enP27410s1: renamed from eth1
Apr 30 03:28:09.834842 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 30 03:28:09.902397 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (464)
Apr 30 03:28:09.922321 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 30 03:28:09.931993 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (458)
Apr 30 03:28:09.945701 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 30 03:28:09.948887 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 30 03:28:09.960507 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 30 03:28:09.971511 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 03:28:09.982396 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:28:09.988389 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:28:10.993388 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:28:10.995391 disk-uuid[605]: The operation has completed successfully.
Apr 30 03:28:11.082686 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 03:28:11.082802 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 03:28:11.101526 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 03:28:11.107213 sh[691]: Success
Apr 30 03:28:11.135420 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 30 03:28:11.330752 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 03:28:11.346474 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 03:28:11.351009 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 03:28:11.368431 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26
Apr 30 03:28:11.368501 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:11.371830 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 03:28:11.374628 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 03:28:11.377002 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 03:28:11.672437 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 03:28:11.677802 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 03:28:11.687539 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 03:28:11.691513 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 03:28:11.708377 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:11.708412 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:11.712912 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:28:11.732557 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:28:11.744512 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:11.744164 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 03:28:11.752311 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 03:28:11.762517 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 03:28:11.798143 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:28:11.808940 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:28:11.829524 systemd-networkd[875]: lo: Link UP
Apr 30 03:28:11.829533 systemd-networkd[875]: lo: Gained carrier
Apr 30 03:28:11.831693 systemd-networkd[875]: Enumeration completed
Apr 30 03:28:11.831915 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:28:11.834373 systemd[1]: Reached target network.target - Network.
Apr 30 03:28:11.845083 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:28:11.845091 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:28:11.906397 kernel: mlx5_core 6b12:00:02.0 enP27410s1: Link up
Apr 30 03:28:11.934513 kernel: hv_netvsc 7c1e5235-47cc-7c1e-5235-47cc7c1e5235 eth0: Data path switched to VF: enP27410s1
Apr 30 03:28:11.934972 systemd-networkd[875]: enP27410s1: Link UP
Apr 30 03:28:11.935143 systemd-networkd[875]: eth0: Link UP
Apr 30 03:28:11.935425 systemd-networkd[875]: eth0: Gained carrier
Apr 30 03:28:11.935438 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:28:11.940556 systemd-networkd[875]: enP27410s1: Gained carrier
Apr 30 03:28:11.956453 systemd-networkd[875]: eth0: DHCPv4 address 10.200.8.47/24, gateway 10.200.8.1 acquired from 168.63.129.16
Apr 30 03:28:12.769668 ignition[815]: Ignition 2.19.0
Apr 30 03:28:12.769679 ignition[815]: Stage: fetch-offline
Apr 30 03:28:12.769719 ignition[815]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:12.769729 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:12.769850 ignition[815]: parsed url from cmdline: ""
Apr 30 03:28:12.769855 ignition[815]: no config URL provided
Apr 30 03:28:12.769862 ignition[815]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:28:12.769873 ignition[815]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:28:12.769879 ignition[815]: failed to fetch config: resource requires networking
Apr 30 03:28:12.770113 ignition[815]: Ignition finished successfully
Apr 30 03:28:12.790217 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:28:12.798520 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 03:28:12.813033 ignition[884]: Ignition 2.19.0
Apr 30 03:28:12.813043 ignition[884]: Stage: fetch
Apr 30 03:28:12.813254 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:12.813264 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:12.813351 ignition[884]: parsed url from cmdline: ""
Apr 30 03:28:12.813356 ignition[884]: no config URL provided
Apr 30 03:28:12.813380 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:28:12.813389 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:28:12.813409 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 30 03:28:12.885287 ignition[884]: GET result: OK
Apr 30 03:28:12.885442 ignition[884]: config has been read from IMDS userdata
Apr 30 03:28:12.885470 ignition[884]: parsing config with SHA512: d8d484575d4005e7dfdc855b9ea6be1ede4ab46b7a2848d6ebc167dd6c754a25c6e75ba4be9589e0265ca478221f20d2b9ebc28d15796550e242b6d467760347
Apr 30 03:28:12.890572 unknown[884]: fetched base config from "system"
Apr 30 03:28:12.890601 unknown[884]: fetched base config from "system"
Apr 30 03:28:12.891305 ignition[884]: fetch: fetch complete
Apr 30 03:28:12.890611 unknown[884]: fetched user config from "azure"
Apr 30 03:28:12.891315 ignition[884]: fetch: fetch passed
Apr 30 03:28:12.893206 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:28:12.891506 ignition[884]: Ignition finished successfully
Apr 30 03:28:12.901546 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:28:12.917158 ignition[891]: Ignition 2.19.0
Apr 30 03:28:12.917168 ignition[891]: Stage: kargs
Apr 30 03:28:12.917370 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:12.919070 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:28:12.917385 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:12.918206 ignition[891]: kargs: kargs passed
Apr 30 03:28:12.930609 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:28:12.918245 ignition[891]: Ignition finished successfully
Apr 30 03:28:12.944790 ignition[897]: Ignition 2.19.0
Apr 30 03:28:12.944800 ignition[897]: Stage: disks
Apr 30 03:28:12.946617 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:28:12.944990 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:12.950633 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:28:12.945003 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:12.955754 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:28:12.945855 ignition[897]: disks: disks passed
Apr 30 03:28:12.958769 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:28:12.945892 ignition[897]: Ignition finished successfully
Apr 30 03:28:12.966290 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:28:12.981918 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:28:12.991594 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:28:13.043945 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 30 03:28:13.048770 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:28:13.058465 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:28:13.152385 kernel: EXT4-fs (sda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:28:13.153118 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:28:13.157933 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:28:13.197567 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:28:13.207416 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (916)
Apr 30 03:28:13.206487 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:28:13.212771 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 03:28:13.218343 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:28:13.232157 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:13.232196 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:13.232219 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:28:13.232243 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:28:13.219540 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:28:13.240849 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:28:13.243141 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:28:13.260517 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:28:13.631760 systemd-networkd[875]: eth0: Gained IPv6LL
Apr 30 03:28:13.759780 systemd-networkd[875]: enP27410s1: Gained IPv6LL
Apr 30 03:28:13.800907 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:28:13.815008 coreos-metadata[918]: Apr 30 03:28:13.814 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 30 03:28:13.818977 coreos-metadata[918]: Apr 30 03:28:13.817 INFO Fetch successful
Apr 30 03:28:13.818977 coreos-metadata[918]: Apr 30 03:28:13.817 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 30 03:28:13.827228 coreos-metadata[918]: Apr 30 03:28:13.827 INFO Fetch successful
Apr 30 03:28:13.833349 coreos-metadata[918]: Apr 30 03:28:13.828 INFO wrote hostname ci-4081.3.3-a-a5554f61da to /sysroot/etc/hostname
Apr 30 03:28:13.837448 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:28:13.828856 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:28:13.844633 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:28:13.849051 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:28:14.645294 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:28:14.654481 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:28:14.660519 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:28:14.667876 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:14.669060 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 03:28:14.695178 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 03:28:14.705762 ignition[1034]: INFO : Ignition 2.19.0 Apr 30 03:28:14.705762 ignition[1034]: INFO : Stage: mount Apr 30 03:28:14.711761 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:14.711761 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:14.711761 ignition[1034]: INFO : mount: mount passed Apr 30 03:28:14.711761 ignition[1034]: INFO : Ignition finished successfully Apr 30 03:28:14.707555 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:28:14.727444 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:28:14.735372 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:28:14.752382 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1045) Apr 30 03:28:14.752417 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:14.756377 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:14.760505 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:28:14.765389 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:28:14.766773 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:28:14.789014 ignition[1061]: INFO : Ignition 2.19.0 Apr 30 03:28:14.789014 ignition[1061]: INFO : Stage: files Apr 30 03:28:14.792826 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:14.792826 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:14.792826 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:28:14.816834 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:28:14.816834 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:28:14.905511 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:28:14.909416 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:28:14.913490 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:28:14.909792 unknown[1061]: wrote ssh authorized keys file for user: core Apr 30 03:28:14.923629 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Apr 30 03:28:14.928380 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Apr 30 03:28:15.027206 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 03:28:15.135886 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing 
file "/sysroot/home/core/nginx.yaml" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Apr 30 03:28:15.930687 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 03:28:16.919310 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 03:28:16.919310 ignition[1061]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 03:28:16.945617 ignition[1061]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:16.950682 ignition[1061]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:16.950682 ignition[1061]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 03:28:16.950682 ignition[1061]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:16.950682 ignition[1061]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:16.950682 ignition[1061]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:28:16.950682 ignition[1061]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:28:16.950682 ignition[1061]: INFO : files: files passed Apr 30 03:28:16.950682 ignition[1061]: INFO : Ignition finished successfully Apr 30 03:28:16.947910 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 03:28:16.979534 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
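The files-stage operations logged above (user "core" with an SSH key, the Helm tarball under /opt, the kubernetes sysext image linked from /etc/extensions, and the preset-enabled prepare-helm.service unit) are the kind of output an Ignition v3 config produces. A sketch of such a config, rendered as a Python dict for illustration; the SSH key and unit body are placeholders, and only the paths and URLs are taken from the log:

import json

ignition_config = {
    "ignition": {"version": "3.3.0"},
    # op(1)/op(2): create the user and install its authorized keys.
    "passwd": {"users": [{
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"],
    }]},
    "storage": {
        # op(3): stream a remote artifact straight into the root filesystem.
        "files": [{
            "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
            "contents": {"source":
                "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
        }],
        # op(9): systemd-sysext discovers extension images via /etc/extensions.
        "links": [{
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
        }],
    },
    # op(b)-op(d): write the unit and preset it to enabled.
    "systemd": {"units": [{
        "name": "prepare-helm.service",
        "enabled": True,
        "contents": "[Unit]\n# placeholder unit body\n",
    }]},
}
print(json.dumps(ignition_config, indent=2))

Every path in the log carries a /sysroot prefix because Ignition applies the config against the still-mounted initrd view of the root filesystem.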
Apr 30 03:28:16.984511 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 03:28:16.987473 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 03:28:16.987557 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 03:28:17.009893 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:17.009893 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:17.023386 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:17.013331 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:17.015671 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 03:28:17.031141 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 03:28:17.055035 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 03:28:17.055150 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 03:28:17.062293 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 03:28:17.067708 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 03:28:17.074609 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 03:28:17.081521 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 03:28:17.094728 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:17.103552 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 03:28:17.115557 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:17.121333 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:17.127224 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 03:28:17.129557 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 03:28:17.129663 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:17.139815 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 03:28:17.144855 systemd[1]: Stopped target basic.target - Basic System. Apr 30 03:28:17.147230 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 03:28:17.152159 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:28:17.157769 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 03:28:17.163344 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 03:28:17.168580 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:28:17.174549 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 03:28:17.180035 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 03:28:17.187430 systemd[1]: Stopped target swap.target - Swaps. Apr 30 03:28:17.189480 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 03:28:17.189596 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:28:17.194460 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
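The quench step above records stage completion, complementing the /sysroot/etc/.ignition-result.json written by op(e) in the files stage. After the root switch that file is visible at /etc/.ignition-result.json, so a post-boot check can stay this small (the file's exact schema is version-dependent, hence the plain dump):

import json

# Path taken from op(e) above; the /sysroot prefix disappears after the pivot.
with open("/etc/.ignition-result.json") as f:
    print(json.dumps(json.load(f), indent=2))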
Apr 30 03:28:17.198951 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:17.204503 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 03:28:17.206908 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:17.210177 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 03:28:17.218618 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 03:28:17.226474 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 03:28:17.226614 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:17.232554 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 03:28:17.232685 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 03:28:17.238046 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 03:28:17.238177 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 03:28:17.252672 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 03:28:17.258949 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 03:28:17.261493 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 03:28:17.261671 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:17.267630 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 03:28:17.267969 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:28:17.279639 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 03:28:17.279727 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 03:28:17.292378 ignition[1114]: INFO : Ignition 2.19.0 Apr 30 03:28:17.292378 ignition[1114]: INFO : Stage: umount Apr 30 03:28:17.292378 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:17.292378 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:17.292378 ignition[1114]: INFO : umount: umount passed Apr 30 03:28:17.312208 ignition[1114]: INFO : Ignition finished successfully Apr 30 03:28:17.293273 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 03:28:17.293380 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 03:28:17.297178 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 03:28:17.297270 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 03:28:17.303095 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 03:28:17.303140 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 03:28:17.308001 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 03:28:17.308049 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 03:28:17.312229 systemd[1]: Stopped target network.target - Network. Apr 30 03:28:17.334289 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 03:28:17.335125 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:28:17.339432 systemd[1]: Stopped target paths.target - Path Units. Apr 30 03:28:17.345949 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Apr 30 03:28:17.348633 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:17.354782 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 03:28:17.359238 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 03:28:17.361435 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 03:28:17.361475 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:28:17.365525 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 03:28:17.365571 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:28:17.369939 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 03:28:17.372049 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 03:28:17.382664 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 03:28:17.382727 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 03:28:17.387663 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 03:28:17.392371 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 03:28:17.395793 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 03:28:17.396279 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 03:28:17.396376 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 03:28:17.399795 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 03:28:17.399895 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 03:28:17.423426 systemd-networkd[875]: eth0: DHCPv6 lease lost Apr 30 03:28:17.425909 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 03:28:17.426042 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 03:28:17.429714 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 03:28:17.429792 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:17.446621 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 03:28:17.448937 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 03:28:17.448993 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:28:17.454309 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:17.460084 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 03:28:17.460213 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 03:28:17.475482 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:28:17.475675 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:17.482961 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 03:28:17.483020 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:17.487929 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 03:28:17.487985 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:17.491933 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 03:28:17.492056 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:17.493911 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Apr 30 03:28:17.525729 kernel: hv_netvsc 7c1e5235-47cc-7c1e-5235-47cc7c1e5235 eth0: Data path switched from VF: enP27410s1 Apr 30 03:28:17.493957 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:17.500872 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 03:28:17.500909 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:17.506097 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 03:28:17.506147 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:28:17.511166 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 03:28:17.511207 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 03:28:17.518965 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:28:17.519018 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:17.531553 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 03:28:17.537934 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 03:28:17.537992 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:28:17.543565 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 03:28:17.543620 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:28:17.569677 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 03:28:17.569735 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:17.574654 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:28:17.574702 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:17.577625 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 03:28:17.577721 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 03:28:17.587777 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 03:28:17.590003 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 03:28:17.595750 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 03:28:17.610519 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 03:28:17.618206 systemd[1]: Switching root. 
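The hv_netvsc line above shows Azure accelerated networking being unwound before the root switch: the Mellanox VF enP27410s1 is detached and traffic falls back to the synthetic hv_netvsc path behind eth0. A small sketch that makes the VF/synthetic pairing visible from userspace, assuming only the standard sysfs layout:

import os

# Each interface's bound driver is a symlink at /sys/class/net/<dev>/device/driver;
# on an accelerated-networking VM this prints hv_netvsc for eth0 and mlx5_core
# for the VF interface while both are present.
for dev in sorted(os.listdir("/sys/class/net")):
    drv_link = f"/sys/class/net/{dev}/device/driver"
    drv = (os.path.basename(os.path.realpath(drv_link))
           if os.path.exists(drv_link) else "virtual")
    print(f"{dev}: {drv}")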
Apr 30 03:28:17.678623 systemd-journald[176]: Journal stopped
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Apr 30 03:28:07.047159 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Apr 30 03:28:07.047166 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Apr 30 03:28:07.047176 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Apr 30 03:28:07.047183 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Apr 30 03:28:07.047192 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Apr 30 03:28:07.047201 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 30 03:28:07.047209 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 30 03:28:07.047219 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Apr 30 03:28:07.047228 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Apr 30 03:28:07.047237 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Apr 30 03:28:07.047246 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Apr 30 03:28:07.047254 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Apr 30 03:28:07.047264 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Apr 30 03:28:07.047273 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Apr 30 03:28:07.047282 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Apr 30 03:28:07.047293 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Apr 30 03:28:07.047302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Apr 30 03:28:07.047317 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Apr 30 03:28:07.047333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Apr 30 03:28:07.047350 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Apr 30 03:28:07.047365 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Apr 30 03:28:07.047380 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Apr 30 03:28:07.047396 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Apr 30 03:28:07.047412 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Apr 30 03:28:07.047428 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Apr 30 03:28:07.047445 kernel: Zone ranges: Apr 30 03:28:07.047465 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 03:28:07.047479 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 30 03:28:07.047493 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Apr 30 03:28:07.047507 kernel: Movable zone start for each node Apr 30 03:28:07.047522 kernel: Early memory node ranges Apr 30 03:28:07.047538 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 30 03:28:07.047558 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Apr 30 03:28:07.047576 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Apr 30 03:28:07.047593 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Apr 30 03:28:07.047619 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Apr 30 03:28:07.047635 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 03:28:07.047649 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 30 03:28:07.047667 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Apr 30 03:28:07.047684 kernel: ACPI: PM-Timer IO Port: 0x408 Apr 30 03:28:07.047698 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Apr 30 03:28:07.047713 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Apr 30 03:28:07.047726 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 03:28:07.047741 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 03:28:07.047761 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Apr 30 03:28:07.047775 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 30 03:28:07.047791 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Apr 30 03:28:07.047806 kernel: Booting paravirtualized kernel on Hyper-V Apr 30 03:28:07.047823 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 03:28:07.047837 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 30 03:28:07.047870 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Apr 30 03:28:07.047885 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Apr 30 03:28:07.047902 kernel: pcpu-alloc: [0] 0 1 Apr 30 03:28:07.047925 kernel: Hyper-V: PV spinlocks enabled Apr 30 03:28:07.047940 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 30 03:28:07.047958 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:28:07.047975 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 03:28:07.047991 kernel: random: crng init done Apr 30 03:28:07.048006 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 30 03:28:07.048023 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 03:28:07.048035 kernel: Fallback order for Node 0: 0 Apr 30 03:28:07.048049 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Apr 30 03:28:07.048071 kernel: Policy zone: Normal Apr 30 03:28:07.048085 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 03:28:07.048102 kernel: software IO TLB: area num 2. Apr 30 03:28:07.048116 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 310124K reserved, 0K cma-reserved) Apr 30 03:28:07.048130 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 03:28:07.048144 kernel: ftrace: allocating 37944 entries in 149 pages Apr 30 03:28:07.048157 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 03:28:07.048171 kernel: Dynamic Preempt: voluntary Apr 30 03:28:07.048185 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 03:28:07.048202 kernel: rcu: RCU event tracing is enabled. Apr 30 03:28:07.048221 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 03:28:07.048236 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 03:28:07.048252 kernel: Rude variant of Tasks RCU enabled. Apr 30 03:28:07.048268 kernel: Tracing variant of Tasks RCU enabled. 
Apr 30 03:28:07.048283 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 03:28:07.048302 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 03:28:07.048317 kernel: Using NULL legacy PIC Apr 30 03:28:07.048332 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Apr 30 03:28:07.048347 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 03:28:07.048362 kernel: Console: colour dummy device 80x25 Apr 30 03:28:07.048377 kernel: printk: console [tty1] enabled Apr 30 03:28:07.048393 kernel: printk: console [ttyS0] enabled Apr 30 03:28:07.048409 kernel: printk: bootconsole [earlyser0] disabled Apr 30 03:28:07.048424 kernel: ACPI: Core revision 20230628 Apr 30 03:28:07.048439 kernel: Failed to register legacy timer interrupt Apr 30 03:28:07.048457 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 03:28:07.048471 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 30 03:28:07.048485 kernel: Hyper-V: Using IPI hypercalls Apr 30 03:28:07.048500 kernel: APIC: send_IPI() replaced with hv_send_ipi() Apr 30 03:28:07.048513 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Apr 30 03:28:07.048528 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Apr 30 03:28:07.048542 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Apr 30 03:28:07.048556 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Apr 30 03:28:07.048569 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Apr 30 03:28:07.048586 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904) Apr 30 03:28:07.048600 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 30 03:28:07.048613 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 30 03:28:07.048628 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 03:28:07.048642 kernel: Spectre V2 : Mitigation: Retpolines Apr 30 03:28:07.048656 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 03:28:07.048669 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 03:28:07.048683 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 30 03:28:07.048697 kernel: RETBleed: Vulnerable Apr 30 03:28:07.048714 kernel: Speculative Store Bypass: Vulnerable Apr 30 03:28:07.048729 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Apr 30 03:28:07.048744 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 30 03:28:07.048760 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 03:28:07.048774 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 03:28:07.048788 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 03:28:07.048802 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 30 03:28:07.048816 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 30 03:28:07.048830 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 30 03:28:07.050876 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 03:28:07.050892 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 30 03:28:07.050905 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 30 03:28:07.050916 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 30 03:28:07.050924 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 30 03:28:07.050934 kernel: Freeing SMP alternatives memory: 32K Apr 30 03:28:07.050943 kernel: pid_max: default: 32768 minimum: 301 Apr 30 03:28:07.050953 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 03:28:07.050962 kernel: landlock: Up and running. Apr 30 03:28:07.050970 kernel: SELinux: Initializing. Apr 30 03:28:07.050978 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 03:28:07.050989 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 03:28:07.050998 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 30 03:28:07.051007 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:28:07.051019 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:28:07.051030 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:28:07.051039 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 30 03:28:07.051047 kernel: signal: max sigframe size: 3632 Apr 30 03:28:07.051058 kernel: rcu: Hierarchical SRCU implementation. Apr 30 03:28:07.051067 kernel: rcu: Max phase no-delay instances is 400. Apr 30 03:28:07.051078 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 30 03:28:07.051086 kernel: smp: Bringing up secondary CPUs ... Apr 30 03:28:07.051096 kernel: smpboot: x86: Booting SMP configuration: Apr 30 03:28:07.051106 kernel: .... node #0, CPUs: #1 Apr 30 03:28:07.051115 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Apr 30 03:28:07.051127 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Apr 30 03:28:07.051138 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 03:28:07.051148 kernel: smpboot: Max logical packages: 1 Apr 30 03:28:07.051158 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS) Apr 30 03:28:07.051168 kernel: devtmpfs: initialized Apr 30 03:28:07.051178 kernel: x86/mm: Memory block size: 128MB Apr 30 03:28:07.051191 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Apr 30 03:28:07.051199 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 03:28:07.051209 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 03:28:07.051218 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 03:28:07.051227 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 03:28:07.051237 kernel: audit: initializing netlink subsys (disabled) Apr 30 03:28:07.051247 kernel: audit: type=2000 audit(1745983685.027:1): state=initialized audit_enabled=0 res=1 Apr 30 03:28:07.051256 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 03:28:07.051265 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 03:28:07.051277 kernel: cpuidle: using governor menu Apr 30 03:28:07.051286 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 03:28:07.051296 kernel: dca service started, version 1.12.1 Apr 30 03:28:07.051305 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Apr 30 03:28:07.051316 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 30 03:28:07.051324 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 03:28:07.051335 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 03:28:07.051345 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 03:28:07.051354 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 03:28:07.051368 kernel: ACPI: Added _OSI(Module Device) Apr 30 03:28:07.051376 kernel: ACPI: Added _OSI(Processor Device) Apr 30 03:28:07.051387 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 03:28:07.051396 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 03:28:07.051406 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 03:28:07.051415 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 30 03:28:07.051425 kernel: ACPI: Interpreter enabled Apr 30 03:28:07.051435 kernel: ACPI: PM: (supports S0 S5) Apr 30 03:28:07.051444 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 03:28:07.051458 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 03:28:07.051466 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 30 03:28:07.051477 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Apr 30 03:28:07.051488 kernel: iommu: Default domain type: Translated Apr 30 03:28:07.051498 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 03:28:07.051508 kernel: efivars: Registered efivars operations Apr 30 03:28:07.051519 kernel: PCI: Using ACPI for IRQ routing Apr 30 03:28:07.051529 kernel: PCI: System does not support PCI Apr 30 03:28:07.051539 kernel: vgaarb: loaded Apr 30 03:28:07.051549 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Apr 30 03:28:07.051561 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 03:28:07.051569 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 03:28:07.051579 kernel: 
pnp: PnP ACPI init Apr 30 03:28:07.051588 kernel: pnp: PnP ACPI: found 3 devices Apr 30 03:28:07.051598 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 03:28:07.051607 kernel: NET: Registered PF_INET protocol family Apr 30 03:28:07.051617 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 30 03:28:07.051626 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Apr 30 03:28:07.051637 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 03:28:07.051648 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 03:28:07.051657 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 30 03:28:07.051667 kernel: TCP: Hash tables configured (established 65536 bind 65536) Apr 30 03:28:07.051676 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 30 03:28:07.051686 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 30 03:28:07.051694 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 03:28:07.051705 kernel: NET: Registered PF_XDP protocol family Apr 30 03:28:07.051713 kernel: PCI: CLS 0 bytes, default 64 Apr 30 03:28:07.051726 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 30 03:28:07.051735 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Apr 30 03:28:07.051746 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 30 03:28:07.051753 kernel: Initialise system trusted keyrings Apr 30 03:28:07.051764 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Apr 30 03:28:07.051772 kernel: Key type asymmetric registered Apr 30 03:28:07.051783 kernel: Asymmetric key parser 'x509' registered Apr 30 03:28:07.051791 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 03:28:07.051803 kernel: io scheduler mq-deadline registered Apr 30 03:28:07.051813 kernel: io scheduler kyber registered Apr 30 03:28:07.051821 kernel: io scheduler bfq registered Apr 30 03:28:07.051832 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 03:28:07.051840 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 03:28:07.051872 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 03:28:07.051883 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 30 03:28:07.051893 kernel: i8042: PNP: No PS/2 controller found. 
Apr 30 03:28:07.052033 kernel: rtc_cmos 00:02: registered as rtc0 Apr 30 03:28:07.052140 kernel: rtc_cmos 00:02: setting system clock to 2025-04-30T03:28:06 UTC (1745983686) Apr 30 03:28:07.052233 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Apr 30 03:28:07.052247 kernel: intel_pstate: CPU model not supported Apr 30 03:28:07.052258 kernel: efifb: probing for efifb Apr 30 03:28:07.052269 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 30 03:28:07.052280 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 30 03:28:07.052293 kernel: efifb: scrolling: redraw Apr 30 03:28:07.052307 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 30 03:28:07.052328 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 03:28:07.052342 kernel: fb0: EFI VGA frame buffer device Apr 30 03:28:07.052356 kernel: pstore: Using crash dump compression: deflate Apr 30 03:28:07.052371 kernel: pstore: Registered efi_pstore as persistent store backend Apr 30 03:28:07.052385 kernel: NET: Registered PF_INET6 protocol family Apr 30 03:28:07.052399 kernel: Segment Routing with IPv6 Apr 30 03:28:07.052413 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 03:28:07.052428 kernel: NET: Registered PF_PACKET protocol family Apr 30 03:28:07.052442 kernel: Key type dns_resolver registered Apr 30 03:28:07.052459 kernel: IPI shorthand broadcast: enabled Apr 30 03:28:07.052474 kernel: sched_clock: Marking stable (770002900, 43159500)->(1014063100, -200900700) Apr 30 03:28:07.052488 kernel: registered taskstats version 1 Apr 30 03:28:07.052503 kernel: Loading compiled-in X.509 certificates Apr 30 03:28:07.052517 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b' Apr 30 03:28:07.052531 kernel: Key type .fscrypt registered Apr 30 03:28:07.052545 kernel: Key type fscrypt-provisioning registered Apr 30 03:28:07.052560 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 03:28:07.052575 kernel: ima: Allocated hash algorithm: sha1 Apr 30 03:28:07.052592 kernel: ima: No architecture policies found Apr 30 03:28:07.052607 kernel: clk: Disabling unused clocks Apr 30 03:28:07.052622 kernel: Freeing unused kernel image (initmem) memory: 42864K Apr 30 03:28:07.052637 kernel: Write protecting the kernel read-only data: 36864k Apr 30 03:28:07.052652 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K Apr 30 03:28:07.052666 kernel: Run /init as init process Apr 30 03:28:07.052681 kernel: with arguments: Apr 30 03:28:07.052696 kernel: /init Apr 30 03:28:07.052710 kernel: with environment: Apr 30 03:28:07.052727 kernel: HOME=/ Apr 30 03:28:07.052741 kernel: TERM=linux Apr 30 03:28:07.052756 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 03:28:07.052774 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:28:07.052792 systemd[1]: Detected virtualization microsoft. Apr 30 03:28:07.052808 systemd[1]: Detected architecture x86-64. Apr 30 03:28:07.052822 systemd[1]: Running in initrd. Apr 30 03:28:07.052836 systemd[1]: No hostname configured, using default hostname. Apr 30 03:28:07.054889 systemd[1]: Hostname set to <localhost>. 
Apr 30 03:28:07.054904 systemd[1]: Initializing machine ID from random generator. Apr 30 03:28:07.054913 systemd[1]: Queued start job for default target initrd.target. Apr 30 03:28:07.054927 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:07.054935 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:07.054948 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 03:28:07.054957 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:28:07.054969 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 03:28:07.054979 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 03:28:07.054992 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 03:28:07.055001 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 03:28:07.055013 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:07.055021 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:28:07.055032 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:28:07.055042 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:28:07.055055 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:28:07.055065 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:28:07.055075 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:28:07.055085 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:28:07.055094 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 03:28:07.055105 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 03:28:07.055114 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:07.055125 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:07.055139 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:07.055148 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:28:07.055160 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 03:28:07.055168 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:28:07.055178 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 03:28:07.055188 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 03:28:07.055198 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:28:07.055209 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:28:07.055238 systemd-journald[176]: Collecting audit messages is disabled. Apr 30 03:28:07.055263 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:07.055275 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 03:28:07.055284 systemd-journald[176]: Journal started Apr 30 03:28:07.055308 systemd-journald[176]: Runtime Journal (/run/log/journal/da654536e59242c9bbcd9d35c3d32362) is 8.0M, max 158.8M, 150.8M free. 
Apr 30 03:28:07.064980 systemd-modules-load[177]: Inserted module 'overlay' Apr 30 03:28:07.072330 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:28:07.077852 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:07.085636 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 03:28:07.103266 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 03:28:07.103444 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:28:07.112882 kernel: Bridge firewalling registered Apr 30 03:28:07.113047 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:28:07.119125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:07.124905 systemd-modules-load[177]: Inserted module 'br_netfilter' Apr 30 03:28:07.128461 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:07.134824 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:28:07.155014 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:28:07.159973 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:28:07.165812 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:28:07.166917 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:07.183313 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:28:07.193205 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:07.201090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:28:07.203686 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:07.212976 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 03:28:07.233804 dracut-cmdline[215]: dracut-dracut-053 Apr 30 03:28:07.238178 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:28:07.264535 systemd-resolved[211]: Positive Trust Anchors: Apr 30 03:28:07.264552 systemd-resolved[211]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:28:07.264606 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:28:07.288613 systemd-resolved[211]: Defaulting to hostname 'linux'. Apr 30 03:28:07.291995 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:28:07.297463 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:07.320864 kernel: SCSI subsystem initialized Apr 30 03:28:07.329858 kernel: Loading iSCSI transport class v2.0-870. Apr 30 03:28:07.340871 kernel: iscsi: registered transport (tcp) Apr 30 03:28:07.362102 kernel: iscsi: registered transport (qla4xxx) Apr 30 03:28:07.362165 kernel: QLogic iSCSI HBA Driver Apr 30 03:28:07.396817 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 03:28:07.406966 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 03:28:07.432719 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 03:28:07.432780 kernel: device-mapper: uevent: version 1.0.3 Apr 30 03:28:07.435851 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 03:28:07.474864 kernel: raid6: avx512x4 gen() 18335 MB/s Apr 30 03:28:07.493860 kernel: raid6: avx512x2 gen() 18254 MB/s Apr 30 03:28:07.512852 kernel: raid6: avx512x1 gen() 18323 MB/s Apr 30 03:28:07.531856 kernel: raid6: avx2x4 gen() 18287 MB/s Apr 30 03:28:07.550857 kernel: raid6: avx2x2 gen() 18262 MB/s Apr 30 03:28:07.571463 kernel: raid6: avx2x1 gen() 13971 MB/s Apr 30 03:28:07.571512 kernel: raid6: using algorithm avx512x4 gen() 18335 MB/s Apr 30 03:28:07.592715 kernel: raid6: .... xor() 8019 MB/s, rmw enabled Apr 30 03:28:07.592744 kernel: raid6: using avx512x2 recovery algorithm Apr 30 03:28:07.614858 kernel: xor: automatically using best checksumming function avx Apr 30 03:28:07.760867 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 03:28:07.770076 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:28:07.778068 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:07.790524 systemd-udevd[397]: Using default interface naming scheme 'v255'. Apr 30 03:28:07.794924 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:07.811971 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 03:28:07.825294 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Apr 30 03:28:07.850100 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:28:07.860116 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:28:07.898162 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:07.909014 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 30 03:28:07.927296 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 03:28:07.936601 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:28:07.942908 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:07.948454 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:28:07.958011 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 03:28:07.973871 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 03:28:07.985908 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:28:08.009778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:28:08.015052 kernel: hv_vmbus: Vmbus version:5.2 Apr 30 03:28:08.011037 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:08.021428 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:28:08.041377 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 30 03:28:08.041406 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Apr 30 03:28:08.030540 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:28:08.030603 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:08.033735 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:08.051037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:08.075354 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 30 03:28:08.075399 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Apr 30 03:28:08.075422 kernel: PTP clock support registered Apr 30 03:28:08.079152 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:28:08.080472 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:08.089472 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 03:28:08.094875 kernel: AES CTR mode by8 optimization enabled Apr 30 03:28:08.098867 kernel: hv_vmbus: registering driver hv_storvsc Apr 30 03:28:08.102076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:08.120567 kernel: scsi host1: storvsc_host_t Apr 30 03:28:08.120621 kernel: scsi host0: storvsc_host_t Apr 30 03:28:08.120642 kernel: hv_vmbus: registering driver hv_netvsc Apr 30 03:28:08.125879 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 30 03:28:08.125922 kernel: hv_utils: Registering HyperV Utility Driver Apr 30 03:28:08.129595 kernel: hv_vmbus: registering driver hv_utils Apr 30 03:28:08.134931 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Apr 30 03:28:08.139022 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 03:28:08.145483 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:08.149686 kernel: hv_utils: Heartbeat IC version 3.0 Apr 30 03:28:08.149706 kernel: hv_utils: Shutdown IC version 3.2 Apr 30 03:28:08.232402 kernel: hv_utils: TimeSync IC version 4.0 Apr 30 03:28:08.233415 systemd-resolved[211]: Clock change detected. Flushing caches. Apr 30 03:28:08.237502 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 30 03:28:08.251409 kernel: hv_vmbus: registering driver hid_hyperv Apr 30 03:28:08.257411 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Apr 30 03:28:08.263192 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 30 03:28:08.284840 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 30 03:28:08.287477 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 03:28:08.287495 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 30 03:28:08.285233 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:08.302934 kernel: hv_netvsc 7c1e5235-47cc-7c1e-5235-47cc7c1e5235 eth0: VF slot 1 added Apr 30 03:28:08.319109 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 30 03:28:08.341507 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 30 03:28:08.341645 kernel: hv_vmbus: registering driver hv_pci Apr 30 03:28:08.341661 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 03:28:08.341775 kernel: hv_pci d1a157bf-6b12-4921-bb6c-0c6f475bdc44: PCI VMBus probing: Using version 0x10004 Apr 30 03:28:08.377357 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 30 03:28:08.377571 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 30 03:28:08.377751 kernel: hv_pci d1a157bf-6b12-4921-bb6c-0c6f475bdc44: PCI host bridge to bus 6b12:00 Apr 30 03:28:08.377908 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:08.377932 kernel: pci_bus 6b12:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Apr 30 03:28:08.378142 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 30 03:28:08.378333 kernel: pci_bus 6b12:00: No busn resource found for root bus, will use [bus 00-ff] Apr 30 03:28:08.378502 kernel: pci 6b12:00:02.0: [15b3:1016] type 00 class 0x020000 Apr 30 03:28:08.378694 kernel: pci 6b12:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 30 03:28:08.378872 kernel: pci 6b12:00:02.0: enabling Extended Tags Apr 30 03:28:08.379031 kernel: pci 6b12:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6b12:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Apr 30 03:28:08.379213 kernel: pci_bus 6b12:00: busn_res: [bus 00-ff] end is updated to 00 Apr 30 03:28:08.379359 kernel: pci 6b12:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 30 03:28:08.541947 kernel: mlx5_core 6b12:00:02.0: enabling device (0000 -> 0002) Apr 30 03:28:08.768211 kernel: mlx5_core 6b12:00:02.0: firmware version: 14.30.5000 Apr 30 03:28:08.768802 kernel: hv_netvsc 7c1e5235-47cc-7c1e-5235-47cc7c1e5235 eth0: VF registering: eth1 Apr 30 03:28:08.769341 kernel: mlx5_core 6b12:00:02.0 eth1: joined to eth0 Apr 30 03:28:08.769563 kernel: mlx5_core 6b12:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 03:28:08.775394 kernel: mlx5_core 6b12:00:02.0 enP27410s1: renamed from eth1 Apr 30 03:28:09.834842 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 30 03:28:09.902397 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (464) Apr 30 03:28:09.922321 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
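The hv_netvsc / mlx5 interplay above is Azure accelerated networking: the synthetic NIC (eth0) and the SR-IOV virtual function (renamed to enP27410s1) form one pair, and the GUID-like prefix 7c1e5235-47cc-... encodes the MAC address they share. A quick way to confirm the pairing from a shell, assuming these interface names:

    ip -br link show          # eth0 and enP27410s1 report the same MAC
    ethtool -i enP27410s1     # driver: mlx5_core (the VF)
    ethtool -i eth0           # driver: hv_netvsc (the synthetic side)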
Apr 30 03:28:09.931993 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (458) Apr 30 03:28:09.945701 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 30 03:28:09.948887 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 30 03:28:09.960507 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Apr 30 03:28:09.971511 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 03:28:09.982396 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:09.988389 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:10.993388 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:10.995391 disk-uuid[605]: The operation has completed successfully. Apr 30 03:28:11.082686 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:28:11.082802 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:28:11.101526 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 03:28:11.107213 sh[691]: Success Apr 30 03:28:11.135420 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 03:28:11.330752 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 03:28:11.346474 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 03:28:11.351009 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 03:28:11.368431 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:28:11.368501 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:11.371830 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:28:11.374628 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:28:11.377002 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:28:11.672437 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:28:11.677802 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:28:11.687539 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:28:11.691513 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 03:28:11.708377 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:11.708412 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:11.712912 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:28:11.732557 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:28:11.744512 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:11.744164 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 03:28:11.752311 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:28:11.762517 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 03:28:11.798143 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
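verity-setup.service above is what turns the verity.usrhash= value from the kernel command line into /dev/mapper/usr. A minimal illustration of the underlying dm-verity mechanism, not Flatcar's exact invocation (DATA and HASH are placeholder devices, and Flatcar's layout, with the hash tree appended to the USR partition, would additionally need an offset argument):

    # ROOT_HASH is the verity.usrhash value logged on the kernel command line.
    veritysetup open "$DATA" usr "$HASH" \
        c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
    mount -o ro /dev/mapper/usr /sysusr/usr   # per mount.usr= / mount.usrflags=ro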
Apr 30 03:28:11.808940 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:28:11.829524 systemd-networkd[875]: lo: Link UP Apr 30 03:28:11.829533 systemd-networkd[875]: lo: Gained carrier Apr 30 03:28:11.831693 systemd-networkd[875]: Enumeration completed Apr 30 03:28:11.831915 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:28:11.834373 systemd[1]: Reached target network.target - Network. Apr 30 03:28:11.845083 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:11.845091 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:28:11.906397 kernel: mlx5_core 6b12:00:02.0 enP27410s1: Link up Apr 30 03:28:11.934513 kernel: hv_netvsc 7c1e5235-47cc-7c1e-5235-47cc7c1e5235 eth0: Data path switched to VF: enP27410s1 Apr 30 03:28:11.934972 systemd-networkd[875]: enP27410s1: Link UP Apr 30 03:28:11.935143 systemd-networkd[875]: eth0: Link UP Apr 30 03:28:11.935425 systemd-networkd[875]: eth0: Gained carrier Apr 30 03:28:11.935438 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:11.940556 systemd-networkd[875]: enP27410s1: Gained carrier Apr 30 03:28:11.956453 systemd-networkd[875]: eth0: DHCPv4 address 10.200.8.47/24, gateway 10.200.8.1 acquired from 168.63.129.16 Apr 30 03:28:12.769668 ignition[815]: Ignition 2.19.0 Apr 30 03:28:12.769679 ignition[815]: Stage: fetch-offline Apr 30 03:28:12.769719 ignition[815]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:12.769729 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:12.769850 ignition[815]: parsed url from cmdline: "" Apr 30 03:28:12.769855 ignition[815]: no config URL provided Apr 30 03:28:12.769862 ignition[815]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:28:12.769873 ignition[815]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:28:12.769879 ignition[815]: failed to fetch config: resource requires networking Apr 30 03:28:12.770113 ignition[815]: Ignition finished successfully Apr 30 03:28:12.790217 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:28:12.798520 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
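The "potentially unpredictable interface name" warnings above come from eth0 matching a catch-all unit. A minimal sketch of what a fallback like /usr/lib/systemd/network/zz-default.network amounts to (not the verbatim Flatcar file):

    [Match]
    Name=*

    [Network]
    DHCP=yes

The zz- prefix sorts the file last, so any more specific .network file an operator drops in wins the match first.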
Apr 30 03:28:12.813033 ignition[884]: Ignition 2.19.0 Apr 30 03:28:12.813043 ignition[884]: Stage: fetch Apr 30 03:28:12.813254 ignition[884]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:12.813264 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:12.813351 ignition[884]: parsed url from cmdline: "" Apr 30 03:28:12.813356 ignition[884]: no config URL provided Apr 30 03:28:12.813380 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:28:12.813389 ignition[884]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:28:12.813409 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 30 03:28:12.885287 ignition[884]: GET result: OK Apr 30 03:28:12.885442 ignition[884]: config has been read from IMDS userdata Apr 30 03:28:12.885470 ignition[884]: parsing config with SHA512: d8d484575d4005e7dfdc855b9ea6be1ede4ab46b7a2848d6ebc167dd6c754a25c6e75ba4be9589e0265ca478221f20d2b9ebc28d15796550e242b6d467760347 Apr 30 03:28:12.890572 unknown[884]: fetched base config from "system" Apr 30 03:28:12.890601 unknown[884]: fetched base config from "system" Apr 30 03:28:12.891305 ignition[884]: fetch: fetch complete Apr 30 03:28:12.890611 unknown[884]: fetched user config from "azure" Apr 30 03:28:12.891315 ignition[884]: fetch: fetch passed Apr 30 03:28:12.893206 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 03:28:12.891506 ignition[884]: Ignition finished successfully Apr 30 03:28:12.901546 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 03:28:12.917158 ignition[891]: Ignition 2.19.0 Apr 30 03:28:12.917168 ignition[891]: Stage: kargs Apr 30 03:28:12.917370 ignition[891]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:12.919070 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 03:28:12.917385 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:12.918206 ignition[891]: kargs: kargs passed Apr 30 03:28:12.930609 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 03:28:12.918245 ignition[891]: Ignition finished successfully Apr 30 03:28:12.944790 ignition[897]: Ignition 2.19.0 Apr 30 03:28:12.944800 ignition[897]: Stage: disks Apr 30 03:28:12.946617 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 03:28:12.944990 ignition[897]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:12.950633 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 03:28:12.945003 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:12.955754 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:28:12.945855 ignition[897]: disks: disks passed Apr 30 03:28:12.958769 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:28:12.945892 ignition[897]: Ignition finished successfully Apr 30 03:28:12.966290 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:28:12.981918 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:28:12.991594 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 03:28:13.043945 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 30 03:28:13.048770 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
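The fetch stage above reads the Ignition payload from Azure's instance metadata service. The same request can be replayed by hand from the running machine; IMDS only answers when the Metadata: true header is present, and the userData endpoint returns base64:

    curl -s -H "Metadata: true" \
      "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text" \
      | base64 --decode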
Apr 30 03:28:13.058465 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 03:28:13.152385 kernel: EXT4-fs (sda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none. Apr 30 03:28:13.153118 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 03:28:13.157933 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 03:28:13.197567 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:28:13.207416 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (916) Apr 30 03:28:13.206487 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 03:28:13.212771 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 03:28:13.218343 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 03:28:13.232157 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:13.232196 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:13.232219 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:28:13.232243 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:28:13.219540 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:28:13.240849 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:28:13.243141 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 03:28:13.260517 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 03:28:13.631760 systemd-networkd[875]: eth0: Gained IPv6LL Apr 30 03:28:13.759780 systemd-networkd[875]: enP27410s1: Gained IPv6LL Apr 30 03:28:13.800907 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 03:28:13.815008 coreos-metadata[918]: Apr 30 03:28:13.814 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 30 03:28:13.818977 coreos-metadata[918]: Apr 30 03:28:13.817 INFO Fetch successful Apr 30 03:28:13.818977 coreos-metadata[918]: Apr 30 03:28:13.817 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 30 03:28:13.827228 coreos-metadata[918]: Apr 30 03:28:13.827 INFO Fetch successful Apr 30 03:28:13.833349 coreos-metadata[918]: Apr 30 03:28:13.828 INFO wrote hostname ci-4081.3.3-a-a5554f61da to /sysroot/etc/hostname Apr 30 03:28:13.837448 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Apr 30 03:28:13.828856 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 03:28:13.844633 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 03:28:13.849051 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 03:28:14.645294 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 03:28:14.654481 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 03:28:14.660519 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 03:28:14.667876 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:14.669060 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 03:28:14.695178 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 30 03:28:14.705762 ignition[1034]: INFO : Ignition 2.19.0 Apr 30 03:28:14.705762 ignition[1034]: INFO : Stage: mount Apr 30 03:28:14.711761 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:14.711761 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:14.711761 ignition[1034]: INFO : mount: mount passed Apr 30 03:28:14.711761 ignition[1034]: INFO : Ignition finished successfully Apr 30 03:28:14.707555 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:28:14.727444 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:28:14.735372 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:28:14.752382 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1045) Apr 30 03:28:14.752417 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:14.756377 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:14.760505 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:28:14.765389 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:28:14.766773 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:28:14.789014 ignition[1061]: INFO : Ignition 2.19.0 Apr 30 03:28:14.789014 ignition[1061]: INFO : Stage: files Apr 30 03:28:14.792826 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:14.792826 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:14.792826 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:28:14.816834 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:28:14.816834 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:28:14.905511 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:28:14.909416 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:28:14.913490 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:28:14.909792 unknown[1061]: wrote ssh authorized keys file for user: core Apr 30 03:28:14.923629 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Apr 30 03:28:14.928380 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Apr 30 03:28:15.027206 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 03:28:15.135886 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing 
file "/sysroot/home/core/nginx.yaml" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 03:28:15.141382 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Apr 30 03:28:15.930687 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 03:28:16.919310 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 03:28:16.919310 ignition[1061]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 03:28:16.945617 ignition[1061]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:16.950682 ignition[1061]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:16.950682 ignition[1061]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 03:28:16.950682 ignition[1061]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:16.950682 ignition[1061]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:16.950682 ignition[1061]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:28:16.950682 ignition[1061]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:28:16.950682 ignition[1061]: INFO : files: files passed Apr 30 03:28:16.950682 ignition[1061]: INFO : Ignition finished successfully Apr 30 03:28:16.947910 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 03:28:16.979534 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Apr 30 03:28:16.984511 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 03:28:16.987473 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 03:28:16.987557 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 03:28:17.009893 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:17.009893 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:17.023386 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:17.013331 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:17.015671 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 03:28:17.031141 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 03:28:17.055035 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 03:28:17.055150 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 03:28:17.062293 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 03:28:17.067708 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 03:28:17.074609 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 03:28:17.081521 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 03:28:17.094728 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:17.103552 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 03:28:17.115557 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:17.121333 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:17.127224 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 03:28:17.129557 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 03:28:17.129663 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:17.139815 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 03:28:17.144855 systemd[1]: Stopped target basic.target - Basic System. Apr 30 03:28:17.147230 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 03:28:17.152159 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:28:17.157769 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 03:28:17.163344 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 03:28:17.168580 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:28:17.174549 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 03:28:17.180035 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 03:28:17.187430 systemd[1]: Stopped target swap.target - Swaps. Apr 30 03:28:17.189480 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 03:28:17.189596 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:28:17.194460 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Apr 30 03:28:17.198951 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:17.204503 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 03:28:17.206908 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:17.210177 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 03:28:17.218618 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 03:28:17.226474 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 03:28:17.226614 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:17.232554 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 03:28:17.232685 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 03:28:17.238046 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 03:28:17.238177 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 03:28:17.252672 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 03:28:17.258949 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 03:28:17.261493 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 03:28:17.261671 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:17.267630 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 03:28:17.267969 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:28:17.279639 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 03:28:17.279727 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 03:28:17.292378 ignition[1114]: INFO : Ignition 2.19.0 Apr 30 03:28:17.292378 ignition[1114]: INFO : Stage: umount Apr 30 03:28:17.292378 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:17.292378 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:17.292378 ignition[1114]: INFO : umount: umount passed Apr 30 03:28:17.312208 ignition[1114]: INFO : Ignition finished successfully Apr 30 03:28:17.293273 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 03:28:17.293380 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 03:28:17.297178 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 03:28:17.297270 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 03:28:17.303095 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 03:28:17.303140 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 03:28:17.308001 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 03:28:17.308049 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 03:28:17.312229 systemd[1]: Stopped target network.target - Network. Apr 30 03:28:17.334289 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 03:28:17.335125 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:28:17.339432 systemd[1]: Stopped target paths.target - Path Units. Apr 30 03:28:17.345949 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Apr 30 03:28:17.348633 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:17.354782 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 03:28:17.359238 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 03:28:17.361435 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 03:28:17.361475 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:28:17.365525 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 03:28:17.365571 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:28:17.369939 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 03:28:17.372049 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 03:28:17.382664 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 03:28:17.382727 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 03:28:17.387663 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 03:28:17.392371 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 03:28:17.395793 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 03:28:17.396279 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 03:28:17.396376 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 03:28:17.399795 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 03:28:17.399895 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 03:28:17.423426 systemd-networkd[875]: eth0: DHCPv6 lease lost Apr 30 03:28:17.425909 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 03:28:17.426042 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 03:28:17.429714 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 03:28:17.429792 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:17.446621 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 03:28:17.448937 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 03:28:17.448993 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:28:17.454309 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:17.460084 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 03:28:17.460213 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 03:28:17.475482 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:28:17.475675 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:17.482961 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 03:28:17.483020 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:17.487929 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 03:28:17.487985 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:17.491933 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 03:28:17.492056 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:17.493911 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Apr 30 03:28:17.525729 kernel: hv_netvsc 7c1e5235-47cc-7c1e-5235-47cc7c1e5235 eth0: Data path switched from VF: enP27410s1 Apr 30 03:28:17.493957 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:17.500872 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 03:28:17.500909 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:17.506097 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 03:28:17.506147 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:28:17.511166 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 03:28:17.511207 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 03:28:17.518965 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:28:17.519018 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:17.531553 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 03:28:17.537934 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 03:28:17.537992 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:28:17.543565 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 03:28:17.543620 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:28:17.569677 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 03:28:17.569735 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:17.574654 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:28:17.574702 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:17.577625 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 03:28:17.577721 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 03:28:17.587777 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 03:28:17.590003 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 03:28:17.595750 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 03:28:17.610519 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 03:28:17.618206 systemd[1]: Switching root. Apr 30 03:28:17.678623 systemd-journald[176]: Journal stopped Apr 30 03:28:22.610972 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Apr 30 03:28:22.611020 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 03:28:22.611044 kernel: SELinux: policy capability open_perms=1 Apr 30 03:28:22.611066 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 03:28:22.611082 kernel: SELinux: policy capability always_check_network=0 Apr 30 03:28:22.611100 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 03:28:22.611121 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 03:28:22.611141 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 03:28:22.611160 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 03:28:22.611179 kernel: audit: type=1403 audit(1745983699.525:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 03:28:22.611195 systemd[1]: Successfully loaded SELinux policy in 148.661ms. 
Apr 30 03:28:22.611211 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.900ms. Apr 30 03:28:22.611226 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:28:22.611241 systemd[1]: Detected virtualization microsoft. Apr 30 03:28:22.611260 systemd[1]: Detected architecture x86-64. Apr 30 03:28:22.611274 systemd[1]: Detected first boot. Apr 30 03:28:22.611290 systemd[1]: Hostname set to <ci-4081.3.3-a-a5554f61da>. Apr 30 03:28:22.611305 systemd[1]: Initializing machine ID from random generator. Apr 30 03:28:22.611320 zram_generator::config[1157]: No configuration found. Apr 30 03:28:22.611341 systemd[1]: Populated /etc with preset unit settings. Apr 30 03:28:22.611357 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 03:28:22.611386 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 03:28:22.611402 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 03:28:22.611419 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 03:28:22.611437 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 03:28:22.611454 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 03:28:22.611475 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 03:28:22.611491 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 03:28:22.611508 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 03:28:22.611526 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 03:28:22.611542 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 03:28:22.611559 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:22.611576 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:22.611619 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 03:28:22.611640 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 03:28:22.611657 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 03:28:22.611673 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:28:22.611689 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 03:28:22.611706 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:22.611723 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 03:28:22.611744 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 03:28:22.611766 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 03:28:22.611791 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 03:28:22.611807 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 30 03:28:22.611826 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:28:22.611843 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:28:22.612121 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:28:22.612140 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 03:28:22.612158 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 03:28:22.612179 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:22.612196 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:22.612214 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:22.612231 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 03:28:22.612249 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 03:28:22.612269 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 03:28:22.612287 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 03:28:22.612304 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:22.612322 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 03:28:22.612340 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 03:28:22.612358 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 03:28:22.612398 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 03:28:22.612416 systemd[1]: Reached target machines.target - Containers. Apr 30 03:28:22.612438 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 03:28:22.612457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:22.612475 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:28:22.612492 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 03:28:22.612510 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:22.612528 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:28:22.612545 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:28:22.612563 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 03:28:22.612581 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:22.612602 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 03:28:22.612620 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 03:28:22.612638 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 03:28:22.612656 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 03:28:22.612674 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 03:28:22.612692 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:28:22.612709 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 30 03:28:22.612728 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 03:28:22.612748 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 03:28:22.612766 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:28:22.612784 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 03:28:22.612827 systemd-journald[1256]: Collecting audit messages is disabled. Apr 30 03:28:22.612864 systemd[1]: Stopped verity-setup.service. Apr 30 03:28:22.612885 systemd-journald[1256]: Journal started Apr 30 03:28:22.612917 systemd-journald[1256]: Runtime Journal (/run/log/journal/69db924c35914cb1b715467b2bcf7eb6) is 8.0M, max 158.8M, 150.8M free. Apr 30 03:28:22.628305 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:22.628346 kernel: fuse: init (API version 7.39) Apr 30 03:28:21.910038 systemd[1]: Queued start job for default target multi-user.target. Apr 30 03:28:22.055627 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 30 03:28:22.056000 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 03:28:22.643210 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:28:22.643262 kernel: loop: module loaded Apr 30 03:28:22.637999 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 03:28:22.640730 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 03:28:22.646749 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 03:28:22.649092 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 03:28:22.651972 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 03:28:22.654687 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 03:28:22.657234 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 03:28:22.660792 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:22.664624 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 03:28:22.664837 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 03:28:22.667585 kernel: ACPI: bus type drm_connector registered Apr 30 03:28:22.669989 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:22.670142 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:22.672929 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:28:22.673078 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:28:22.675678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:22.675822 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:22.679143 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 03:28:22.679418 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 03:28:22.682670 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:22.682941 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:22.685981 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
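The burst of modprobe@<name>.service starts and finishes above are all instances of a single template unit that systemd ships; paraphrased roughly, it is:

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i

The %i instance name carries the module, so "systemctl start modprobe@loop.service" is equivalent to "modprobe loop", and the leading "-" lets boot continue even if a module is absent.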
Apr 30 03:28:22.689205 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 03:28:22.692817 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 03:28:22.712784 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 03:28:22.723117 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 03:28:22.733425 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 03:28:22.737718 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 03:28:22.737822 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:28:22.742911 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 03:28:22.755227 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 03:28:22.760585 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 03:28:22.763566 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:22.777815 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 03:28:22.782942 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 03:28:22.785766 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:28:22.792501 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 03:28:22.795144 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:28:22.796187 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:28:22.800546 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 03:28:22.805606 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:28:22.815612 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:22.819063 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 03:28:22.825498 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 03:28:22.828837 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 03:28:22.835484 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 03:28:22.843288 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 03:28:22.848943 systemd-journald[1256]: Time spent on flushing to /var/log/journal/69db924c35914cb1b715467b2bcf7eb6 is 27.791ms for 963 entries. Apr 30 03:28:22.848943 systemd-journald[1256]: System Journal (/var/log/journal/69db924c35914cb1b715467b2bcf7eb6) is 8.0M, max 2.6G, 2.6G free. Apr 30 03:28:22.936026 systemd-journald[1256]: Received client request to flush runtime journal. Apr 30 03:28:22.936078 kernel: loop0: detected capacity change from 0 to 140768 Apr 30 03:28:22.851411 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
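The journal-flush lines above record the runtime journal being moved to persistent storage, along with the size caps in force (8.0M runtime, 2.6G maximum for the system journal here). Those caps are tunable in journald.conf; a minimal sketch with illustrative values:

    # /etc/systemd/journald.conf
    [Journal]
    RuntimeMaxUse=64M     # cap for /run/log/journal before flushing
    SystemMaxUse=2G       # cap for /var/log/journal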
Apr 30 03:28:22.856607 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 03:28:22.867848 udevadm[1303]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 03:28:22.928393 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:22.937098 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 03:28:22.944320 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Apr 30 03:28:22.944342 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Apr 30 03:28:22.949432 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:28:22.960519 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 03:28:22.967988 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 03:28:22.968745 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 03:28:23.133278 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 03:28:23.145986 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:28:23.163332 systemd-tmpfiles[1314]: ACLs are not supported, ignoring. Apr 30 03:28:23.163356 systemd-tmpfiles[1314]: ACLs are not supported, ignoring. Apr 30 03:28:23.167275 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:28:23.256390 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 03:28:23.298382 kernel: loop1: detected capacity change from 0 to 142488 Apr 30 03:28:23.645598 kernel: loop2: detected capacity change from 0 to 218376 Apr 30 03:28:23.683385 kernel: loop3: detected capacity change from 0 to 31056 Apr 30 03:28:23.968995 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 03:28:23.977536 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:23.999106 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Apr 30 03:28:24.050383 kernel: loop4: detected capacity change from 0 to 140768 Apr 30 03:28:24.063402 kernel: loop5: detected capacity change from 0 to 142488 Apr 30 03:28:24.075379 kernel: loop6: detected capacity change from 0 to 218376 Apr 30 03:28:24.081385 kernel: loop7: detected capacity change from 0 to 31056 Apr 30 03:28:24.085070 (sd-merge)[1324]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Apr 30 03:28:24.085595 (sd-merge)[1324]: Merged extensions into '/usr'. Apr 30 03:28:24.088870 systemd[1]: Reloading requested from client PID 1293 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 03:28:24.088885 systemd[1]: Reloading... Apr 30 03:28:24.189390 zram_generator::config[1367]: No configuration found. 
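The (sd-merge) lines above are systemd-sysext's merge worker overlaying the named extension images onto /usr; the kubernetes image is the same .raw file that Ignition linked into /etc/extensions/ earlier. Assuming a shell on the merged system, the state can be inspected and re-applied with:

    systemd-sysext status     # lists merged extensions and their hierarchies
    systemd-sysext refresh    # unmerge, rescan /etc/extensions et al., re-merge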
Apr 30 03:28:24.317425 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:28:24.343442 kernel: hv_vmbus: registering driver hv_balloon Apr 30 03:28:24.353419 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Apr 30 03:28:24.381394 kernel: hv_vmbus: registering driver hyperv_fb Apr 30 03:28:24.411385 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Apr 30 03:28:24.417382 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Apr 30 03:28:24.425522 kernel: Console: switching to colour dummy device 80x25 Apr 30 03:28:24.430867 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 03:28:24.575645 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:24.644386 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1330) Apr 30 03:28:24.719255 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 03:28:24.719995 systemd[1]: Reloading finished in 630 ms. Apr 30 03:28:24.800629 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:24.804879 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 03:28:24.875296 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 30 03:28:24.892319 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Apr 30 03:28:24.898542 systemd[1]: Starting ensure-sysext.service... Apr 30 03:28:24.902501 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:28:24.914552 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:28:24.928405 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:28:24.941540 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:24.951482 systemd[1]: Reloading requested from client PID 1481 ('systemctl') (unit ensure-sysext.service)... Apr 30 03:28:24.951501 systemd[1]: Reloading... Apr 30 03:28:24.976825 systemd-tmpfiles[1485]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 03:28:24.977325 systemd-tmpfiles[1485]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 03:28:24.978596 systemd-tmpfiles[1485]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 03:28:24.979020 systemd-tmpfiles[1485]: ACLs are not supported, ignoring. Apr 30 03:28:24.979106 systemd-tmpfiles[1485]: ACLs are not supported, ignoring. Apr 30 03:28:25.001088 systemd-tmpfiles[1485]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:28:25.001100 systemd-tmpfiles[1485]: Skipping /boot Apr 30 03:28:25.026804 systemd-tmpfiles[1485]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:28:25.027534 systemd-tmpfiles[1485]: Skipping /boot Apr 30 03:28:25.051466 zram_generator::config[1521]: No configuration found. Apr 30 03:28:25.177963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 30 03:28:25.268408 systemd[1]: Reloading finished in 316 ms. Apr 30 03:28:25.283061 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 03:28:25.290776 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 03:28:25.294435 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:25.298046 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:25.309216 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:25.313674 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:28:25.344623 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 03:28:25.348070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:25.349431 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 03:28:25.360765 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:25.365515 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:28:25.374454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:25.378839 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:25.381419 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 03:28:25.389459 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:28:25.397616 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:28:25.414780 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 03:28:25.417623 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:25.421963 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:25.423209 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:25.427313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:25.428676 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:25.432271 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:25.432493 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:25.446958 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:28:25.458257 lvm[1592]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:28:25.458748 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:25.459101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:25.470714 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:25.477840 augenrules[1609]: No rules Apr 30 03:28:25.485592 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Apr 30 03:28:25.493274 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:25.497225 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:25.497532 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:25.499751 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 03:28:25.509565 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:28:25.517626 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:28:25.521808 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 03:28:25.525625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:25.525802 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:25.532173 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:25.532529 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:25.536476 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:25.536639 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:25.551051 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:28:25.556751 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:25.557207 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:25.564862 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:28:25.574597 lvm[1632]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:28:25.578675 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:25.589615 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:28:25.609299 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:28:25.615912 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:25.622551 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:25.622810 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:28:25.625879 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:25.628196 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:25.629155 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:25.632483 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:28:25.632673 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:28:25.636524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:25.636682 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:25.644680 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Apr 30 03:28:25.646755 systemd-networkd[1484]: lo: Link UP Apr 30 03:28:25.646987 systemd-networkd[1484]: lo: Gained carrier Apr 30 03:28:25.648689 systemd[1]: Finished ensure-sysext.service. Apr 30 03:28:25.649923 systemd-networkd[1484]: Enumeration completed Apr 30 03:28:25.650346 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:25.650425 systemd-networkd[1484]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:28:25.650942 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:28:25.654086 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:25.654297 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:25.670185 systemd-resolved[1601]: Positive Trust Anchors: Apr 30 03:28:25.670201 systemd-resolved[1601]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:28:25.670255 systemd-resolved[1601]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:28:25.671926 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 03:28:25.674739 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:28:25.674809 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:28:25.695165 systemd-resolved[1601]: Using system hostname 'ci-4081.3.3-a-a5554f61da'. Apr 30 03:28:25.710385 kernel: mlx5_core 6b12:00:02.0 enP27410s1: Link up Apr 30 03:28:25.730262 kernel: hv_netvsc 7c1e5235-47cc-7c1e-5235-47cc7c1e5235 eth0: Data path switched to VF: enP27410s1 Apr 30 03:28:25.730563 systemd-networkd[1484]: enP27410s1: Link UP Apr 30 03:28:25.730722 systemd-networkd[1484]: eth0: Link UP Apr 30 03:28:25.730733 systemd-networkd[1484]: eth0: Gained carrier Apr 30 03:28:25.730756 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:25.731857 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:28:25.734950 systemd[1]: Reached target network.target - Network. Apr 30 03:28:25.738376 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:25.738674 systemd-networkd[1484]: enP27410s1: Gained carrier Apr 30 03:28:25.762402 systemd-networkd[1484]: eth0: DHCPv4 address 10.200.8.47/24, gateway 10.200.8.1 acquired from 168.63.129.16 Apr 30 03:28:25.934707 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:28:25.938326 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Apr 30 03:28:27.391740 systemd-networkd[1484]: enP27410s1: Gained IPv6LL Apr 30 03:28:27.519588 systemd-networkd[1484]: eth0: Gained IPv6LL Apr 30 03:28:27.522958 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:28:27.526978 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:28:27.956439 ldconfig[1288]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 03:28:27.967376 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 03:28:27.975548 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:28:27.985015 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:28:27.988086 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:28:27.990962 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:28:27.993840 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:28:27.996890 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:28:27.999533 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:28:28.002521 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:28:28.005455 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:28:28.005490 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:28:28.007673 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:28:28.010540 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:28:28.014321 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:28:28.029969 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:28:28.032963 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:28:28.035580 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:28:28.037897 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:28:28.040379 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:28:28.040410 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:28:28.048584 systemd[1]: Starting chronyd.service - NTP client/server... Apr 30 03:28:28.054495 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:28:28.063632 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 03:28:28.069864 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:28:28.075568 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:28:28.086186 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:28:28.088679 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:28:28.088720 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). 
Apr 30 03:28:28.089818 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Apr 30 03:28:28.092983 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Apr 30 03:28:28.095035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:28.102590 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:28:28.106677 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:28:28.113491 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 03:28:28.114472 (chronyd)[1652]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Apr 30 03:28:28.137825 jq[1656]: false Apr 30 03:28:28.131765 KVP[1660]: KVP starting; pid is:1660 Apr 30 03:28:28.125552 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:28:28.134533 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:28:28.138594 chronyd[1669]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Apr 30 03:28:28.141403 chronyd[1669]: Timezone right/UTC failed leap second check, ignoring Apr 30 03:28:28.141570 chronyd[1669]: Loaded seccomp filter (level 2) Apr 30 03:28:28.152455 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:28:28.155487 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:28:28.155952 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:28:28.161551 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:28:28.168403 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:28:28.177677 systemd[1]: Started chronyd.service - NTP client/server. Apr 30 03:28:28.188784 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:28:28.189676 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Apr 30 03:28:28.192389 extend-filesystems[1658]: Found loop4 Apr 30 03:28:28.192389 extend-filesystems[1658]: Found loop5 Apr 30 03:28:28.192389 extend-filesystems[1658]: Found loop6 Apr 30 03:28:28.192389 extend-filesystems[1658]: Found loop7 Apr 30 03:28:28.192389 extend-filesystems[1658]: Found sda Apr 30 03:28:28.192389 extend-filesystems[1658]: Found sda1 Apr 30 03:28:28.192389 extend-filesystems[1658]: Found sda2 Apr 30 03:28:28.192389 extend-filesystems[1658]: Found sda3 Apr 30 03:28:28.192389 extend-filesystems[1658]: Found usr Apr 30 03:28:28.192389 extend-filesystems[1658]: Found sda4 Apr 30 03:28:28.192389 extend-filesystems[1658]: Found sda6 Apr 30 03:28:28.192389 extend-filesystems[1658]: Found sda7 Apr 30 03:28:28.192389 extend-filesystems[1658]: Found sda9 Apr 30 03:28:28.192389 extend-filesystems[1658]: Checking size of /dev/sda9 Apr 30 03:28:28.356661 update_engine[1673]: I20250430 03:28:28.264539 1673 main.cc:92] Flatcar Update Engine starting Apr 30 03:28:28.356661 update_engine[1673]: I20250430 03:28:28.305523 1673 update_check_scheduler.cc:74] Next update check in 7m3s Apr 30 03:28:28.243892 dbus-daemon[1655]: [system] SELinux support is enabled Apr 30 03:28:28.357157 jq[1675]: true Apr 30 03:28:28.357277 extend-filesystems[1658]: Old size kept for /dev/sda9 Apr 30 03:28:28.357277 extend-filesystems[1658]: Found sr0 Apr 30 03:28:28.195844 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:28:28.363352 coreos-metadata[1654]: Apr 30 03:28:28.362 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 30 03:28:28.196471 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 03:28:28.363793 tar[1686]: linux-amd64/LICENSE Apr 30 03:28:28.363793 tar[1686]: linux-amd64/helm Apr 30 03:28:28.211158 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:28:28.211432 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:28:28.215717 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:28:28.364532 jq[1691]: true Apr 30 03:28:28.244060 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:28:28.255866 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:28:28.255899 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:28:28.260807 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:28:28.260831 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:28:28.286749 (ntainerd)[1693]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:28:28.299808 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:28:28.311548 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:28:28.368150 coreos-metadata[1654]: Apr 30 03:28:28.367 INFO Fetch successful Apr 30 03:28:28.368150 coreos-metadata[1654]: Apr 30 03:28:28.367 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Apr 30 03:28:28.322162 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Apr 30 03:28:28.322421 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:28:28.374990 coreos-metadata[1654]: Apr 30 03:28:28.373 INFO Fetch successful Apr 30 03:28:28.376313 coreos-metadata[1654]: Apr 30 03:28:28.376 INFO Fetching http://168.63.129.16/machine/fd8d0817-4027-4b36-b48e-7d606acaed75/ce78d8e1%2Db826%2D437a%2D8fb3%2D69205f4753ed.%5Fci%2D4081.3.3%2Da%2Da5554f61da?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Apr 30 03:28:28.376400 kernel: hv_utils: KVP IC version 4.0 Apr 30 03:28:28.376342 KVP[1660]: KVP LIC Version: 3.1 Apr 30 03:28:28.382376 coreos-metadata[1654]: Apr 30 03:28:28.382 INFO Fetch successful Apr 30 03:28:28.382452 coreos-metadata[1654]: Apr 30 03:28:28.382 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Apr 30 03:28:28.403536 coreos-metadata[1654]: Apr 30 03:28:28.403 INFO Fetch successful Apr 30 03:28:28.414219 systemd-logind[1671]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:28:28.419835 systemd-logind[1671]: New seat seat0. Apr 30 03:28:28.424468 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:28:28.505784 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:28:28.509535 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:28:28.542753 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1735) Apr 30 03:28:28.594989 bash[1741]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:28:28.598784 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:28:28.605716 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 30 03:28:28.641493 locksmithd[1708]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:28:28.889637 sshd_keygen[1690]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:28:28.921984 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:28:28.937610 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:28:28.950691 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Apr 30 03:28:28.962744 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:28:28.962999 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:28:28.981616 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:28:28.999549 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Apr 30 03:28:29.015011 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:28:29.026762 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:28:29.036466 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:28:29.041589 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:28:29.253553 tar[1686]: linux-amd64/README.md Apr 30 03:28:29.265160 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 30 03:28:29.544562 containerd[1693]: time="2025-04-30T03:28:29.544462000Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:28:29.586406 containerd[1693]: time="2025-04-30T03:28:29.586335700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:29.588104 containerd[1693]: time="2025-04-30T03:28:29.588066600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:29.588104 containerd[1693]: time="2025-04-30T03:28:29.588096500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:28:29.588233 containerd[1693]: time="2025-04-30T03:28:29.588115300Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:28:29.588303 containerd[1693]: time="2025-04-30T03:28:29.588279300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:28:29.588343 containerd[1693]: time="2025-04-30T03:28:29.588306400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:29.589543 containerd[1693]: time="2025-04-30T03:28:29.588427100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:29.589543 containerd[1693]: time="2025-04-30T03:28:29.588450400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:29.589543 containerd[1693]: time="2025-04-30T03:28:29.588651400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:29.589543 containerd[1693]: time="2025-04-30T03:28:29.588671800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:29.589543 containerd[1693]: time="2025-04-30T03:28:29.588690600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:29.589543 containerd[1693]: time="2025-04-30T03:28:29.588704600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:29.589543 containerd[1693]: time="2025-04-30T03:28:29.588799100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:29.589543 containerd[1693]: time="2025-04-30T03:28:29.589033000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:29.589543 containerd[1693]: time="2025-04-30T03:28:29.589173800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:29.589543 containerd[1693]: time="2025-04-30T03:28:29.589194000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:28:29.589543 containerd[1693]: time="2025-04-30T03:28:29.589312700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:28:29.589873 containerd[1693]: time="2025-04-30T03:28:29.589388000Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:28:29.602929 containerd[1693]: time="2025-04-30T03:28:29.602888400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:28:29.603038 containerd[1693]: time="2025-04-30T03:28:29.602950300Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:28:29.603038 containerd[1693]: time="2025-04-30T03:28:29.602973200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:28:29.603038 containerd[1693]: time="2025-04-30T03:28:29.602992100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:28:29.603038 containerd[1693]: time="2025-04-30T03:28:29.603013400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:28:29.603198 containerd[1693]: time="2025-04-30T03:28:29.603158100Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:28:29.604693 containerd[1693]: time="2025-04-30T03:28:29.603489400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:28:29.604693 containerd[1693]: time="2025-04-30T03:28:29.603619900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:28:29.604693 containerd[1693]: time="2025-04-30T03:28:29.603642000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:28:29.604693 containerd[1693]: time="2025-04-30T03:28:29.603661000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:28:29.604693 containerd[1693]: time="2025-04-30T03:28:29.603679500Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:28:29.604693 containerd[1693]: time="2025-04-30T03:28:29.603706900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:28:29.604693 containerd[1693]: time="2025-04-30T03:28:29.603727400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:28:29.604693 containerd[1693]: time="2025-04-30T03:28:29.603760000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:28:29.604693 containerd[1693]: time="2025-04-30T03:28:29.603780400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 30 03:28:29.604693 containerd[1693]: time="2025-04-30T03:28:29.603797800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:28:29.604693 containerd[1693]: time="2025-04-30T03:28:29.603814300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:28:29.604693 containerd[1693]: time="2025-04-30T03:28:29.603830800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:28:29.604693 containerd[1693]: time="2025-04-30T03:28:29.603861000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.604693 containerd[1693]: time="2025-04-30T03:28:29.603881700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.605181 containerd[1693]: time="2025-04-30T03:28:29.603897600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.605181 containerd[1693]: time="2025-04-30T03:28:29.603914100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.605181 containerd[1693]: time="2025-04-30T03:28:29.603930300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.605181 containerd[1693]: time="2025-04-30T03:28:29.603948900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.605181 containerd[1693]: time="2025-04-30T03:28:29.603964900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.605181 containerd[1693]: time="2025-04-30T03:28:29.603982400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.605181 containerd[1693]: time="2025-04-30T03:28:29.603999500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.605181 containerd[1693]: time="2025-04-30T03:28:29.604019600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.605181 containerd[1693]: time="2025-04-30T03:28:29.604036100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.605181 containerd[1693]: time="2025-04-30T03:28:29.604053000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.605181 containerd[1693]: time="2025-04-30T03:28:29.604070400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.605181 containerd[1693]: time="2025-04-30T03:28:29.604092400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:28:29.605181 containerd[1693]: time="2025-04-30T03:28:29.604133300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.605181 containerd[1693]: time="2025-04-30T03:28:29.604150600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Apr 30 03:28:29.605181 containerd[1693]: time="2025-04-30T03:28:29.604166400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:28:29.605727 containerd[1693]: time="2025-04-30T03:28:29.604217000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:28:29.605727 containerd[1693]: time="2025-04-30T03:28:29.604239500Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:28:29.605727 containerd[1693]: time="2025-04-30T03:28:29.604254000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:28:29.605727 containerd[1693]: time="2025-04-30T03:28:29.604270400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:28:29.605727 containerd[1693]: time="2025-04-30T03:28:29.604284000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.605727 containerd[1693]: time="2025-04-30T03:28:29.604300000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:28:29.605727 containerd[1693]: time="2025-04-30T03:28:29.604317300Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:28:29.605727 containerd[1693]: time="2025-04-30T03:28:29.604331100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 03:28:29.605989 containerd[1693]: time="2025-04-30T03:28:29.604728000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:28:29.605989 containerd[1693]: time="2025-04-30T03:28:29.604810700Z" level=info msg="Connect containerd service" Apr 30 03:28:29.605989 containerd[1693]: time="2025-04-30T03:28:29.604868800Z" level=info msg="using legacy CRI server" Apr 30 03:28:29.605989 containerd[1693]: time="2025-04-30T03:28:29.604879000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:28:29.605989 containerd[1693]: time="2025-04-30T03:28:29.605006000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:28:29.605989 containerd[1693]: time="2025-04-30T03:28:29.605669600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:28:29.606343 containerd[1693]: time="2025-04-30T03:28:29.606016700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:28:29.606343 containerd[1693]: time="2025-04-30T03:28:29.606075600Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:28:29.606343 containerd[1693]: time="2025-04-30T03:28:29.606177600Z" level=info msg="Start subscribing containerd event" Apr 30 03:28:29.606343 containerd[1693]: time="2025-04-30T03:28:29.606229400Z" level=info msg="Start recovering state" Apr 30 03:28:29.606343 containerd[1693]: time="2025-04-30T03:28:29.606318900Z" level=info msg="Start event monitor" Apr 30 03:28:29.606343 containerd[1693]: time="2025-04-30T03:28:29.606332800Z" level=info msg="Start snapshots syncer" Apr 30 03:28:29.607751 containerd[1693]: time="2025-04-30T03:28:29.606347400Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:28:29.607751 containerd[1693]: time="2025-04-30T03:28:29.606357400Z" level=info msg="Start streaming server" Apr 30 03:28:29.607751 containerd[1693]: time="2025-04-30T03:28:29.606440000Z" level=info msg="containerd successfully booted in 0.064741s" Apr 30 03:28:29.606525 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:28:29.798598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:29.802556 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:28:29.805769 systemd[1]: Startup finished in 716ms (firmware) + 26.357s (loader) + 907ms (kernel) + 12.612s (initrd) + 10.427s (userspace) = 51.020s. 
Apr 30 03:28:29.816540 (kubelet)[1818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:28:30.033304 login[1800]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 03:28:30.039723 login[1801]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 03:28:30.048484 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:28:30.054873 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:28:30.060422 systemd-logind[1671]: New session 1 of user core. Apr 30 03:28:30.068503 systemd-logind[1671]: New session 2 of user core. Apr 30 03:28:30.079604 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:28:30.088802 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:28:30.093393 (systemd)[1829]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:28:30.284528 systemd[1829]: Queued start job for default target default.target. Apr 30 03:28:30.289069 systemd[1829]: Created slice app.slice - User Application Slice. Apr 30 03:28:30.289216 systemd[1829]: Reached target paths.target - Paths. Apr 30 03:28:30.289235 systemd[1829]: Reached target timers.target - Timers. Apr 30 03:28:30.292682 systemd[1829]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:28:30.310463 systemd[1829]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:28:30.310580 systemd[1829]: Reached target sockets.target - Sockets. Apr 30 03:28:30.310599 systemd[1829]: Reached target basic.target - Basic System. Apr 30 03:28:30.310646 systemd[1829]: Reached target default.target - Main User Target. Apr 30 03:28:30.310678 systemd[1829]: Startup finished in 208ms. Apr 30 03:28:30.311257 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:28:30.318566 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:28:30.320664 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 03:28:30.505332 kubelet[1818]: E0430 03:28:30.505282 1818 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:28:30.507857 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:28:30.508044 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:28:30.508423 systemd[1]: kubelet.service: Consumed 1.038s CPU time. 
Apr 30 03:28:30.804301 waagent[1798]: 2025-04-30T03:28:30.804211Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Apr 30 03:28:30.838863 waagent[1798]: 2025-04-30T03:28:30.805824Z INFO Daemon Daemon OS: flatcar 4081.3.3 Apr 30 03:28:30.838863 waagent[1798]: 2025-04-30T03:28:30.806296Z INFO Daemon Daemon Python: 3.11.9 Apr 30 03:28:30.838863 waagent[1798]: 2025-04-30T03:28:30.806933Z INFO Daemon Daemon Run daemon Apr 30 03:28:30.838863 waagent[1798]: 2025-04-30T03:28:30.807332Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.3' Apr 30 03:28:30.838863 waagent[1798]: 2025-04-30T03:28:30.807853Z INFO Daemon Daemon Using waagent for provisioning Apr 30 03:28:30.838863 waagent[1798]: 2025-04-30T03:28:30.808453Z INFO Daemon Daemon Activate resource disk Apr 30 03:28:30.838863 waagent[1798]: 2025-04-30T03:28:30.808811Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 30 03:28:30.838863 waagent[1798]: 2025-04-30T03:28:30.812708Z INFO Daemon Daemon Found device: None Apr 30 03:28:30.838863 waagent[1798]: 2025-04-30T03:28:30.813754Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 30 03:28:30.838863 waagent[1798]: 2025-04-30T03:28:30.814234Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 30 03:28:30.838863 waagent[1798]: 2025-04-30T03:28:30.816185Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 03:28:30.838863 waagent[1798]: 2025-04-30T03:28:30.816959Z INFO Daemon Daemon Running default provisioning handler Apr 30 03:28:30.842058 waagent[1798]: 2025-04-30T03:28:30.841909Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Apr 30 03:28:30.855321 waagent[1798]: 2025-04-30T03:28:30.843665Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 30 03:28:30.855321 waagent[1798]: 2025-04-30T03:28:30.844475Z INFO Daemon Daemon cloud-init is enabled: False Apr 30 03:28:30.855321 waagent[1798]: 2025-04-30T03:28:30.845262Z INFO Daemon Daemon Copying ovf-env.xml Apr 30 03:28:30.928388 waagent[1798]: 2025-04-30T03:28:30.923944Z INFO Daemon Daemon Successfully mounted dvd Apr 30 03:28:30.961813 waagent[1798]: 2025-04-30T03:28:30.955064Z INFO Daemon Daemon Detect protocol endpoint Apr 30 03:28:30.961813 waagent[1798]: 2025-04-30T03:28:30.956233Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 03:28:30.961813 waagent[1798]: 2025-04-30T03:28:30.957261Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Apr 30 03:28:30.961813 waagent[1798]: 2025-04-30T03:28:30.958131Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 30 03:28:30.961813 waagent[1798]: 2025-04-30T03:28:30.959130Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 30 03:28:30.961813 waagent[1798]: 2025-04-30T03:28:30.959858Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 30 03:28:30.969459 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Apr 30 03:28:30.984841 waagent[1798]: 2025-04-30T03:28:30.984795Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 30 03:28:30.992268 waagent[1798]: 2025-04-30T03:28:30.986042Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 30 03:28:30.992268 waagent[1798]: 2025-04-30T03:28:30.986747Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 30 03:28:31.067916 waagent[1798]: 2025-04-30T03:28:31.067799Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 30 03:28:31.071111 waagent[1798]: 2025-04-30T03:28:31.071057Z INFO Daemon Daemon Forcing an update of the goal state. Apr 30 03:28:31.075202 waagent[1798]: 2025-04-30T03:28:31.075149Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 30 03:28:31.090011 waagent[1798]: 2025-04-30T03:28:31.089950Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Apr 30 03:28:31.106087 waagent[1798]: 2025-04-30T03:28:31.091772Z INFO Daemon Apr 30 03:28:31.106087 waagent[1798]: 2025-04-30T03:28:31.092410Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: bfad1b1e-b492-4077-ad06-c828fd3f2856 eTag: 3948098541264354113 source: Fabric] Apr 30 03:28:31.106087 waagent[1798]: 2025-04-30T03:28:31.093644Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Apr 30 03:28:31.106087 waagent[1798]: 2025-04-30T03:28:31.094430Z INFO Daemon Apr 30 03:28:31.106087 waagent[1798]: 2025-04-30T03:28:31.095414Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 30 03:28:31.109504 waagent[1798]: 2025-04-30T03:28:31.109465Z INFO Daemon Daemon Downloading artifacts profile blob Apr 30 03:28:31.284826 waagent[1798]: 2025-04-30T03:28:31.284760Z INFO Daemon Downloaded certificate {'thumbprint': 'CD5618FED209AE165279B352CDB8246424D06A72', 'hasPrivateKey': True} Apr 30 03:28:31.289987 waagent[1798]: 2025-04-30T03:28:31.289929Z INFO Daemon Downloaded certificate {'thumbprint': 'CD074ACA2AD94BCC030067C9BE04C48910B9D16A', 'hasPrivateKey': False} Apr 30 03:28:31.294581 waagent[1798]: 2025-04-30T03:28:31.294529Z INFO Daemon Fetch goal state completed Apr 30 03:28:31.346878 waagent[1798]: 2025-04-30T03:28:31.346750Z INFO Daemon Daemon Starting provisioning Apr 30 03:28:31.353260 waagent[1798]: 2025-04-30T03:28:31.347903Z INFO Daemon Daemon Handle ovf-env.xml. Apr 30 03:28:31.353260 waagent[1798]: 2025-04-30T03:28:31.348783Z INFO Daemon Daemon Set hostname [ci-4081.3.3-a-a5554f61da] Apr 30 03:28:31.365315 waagent[1798]: 2025-04-30T03:28:31.365248Z INFO Daemon Daemon Publish hostname [ci-4081.3.3-a-a5554f61da] Apr 30 03:28:31.372885 waagent[1798]: 2025-04-30T03:28:31.366722Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 30 03:28:31.372885 waagent[1798]: 2025-04-30T03:28:31.367583Z INFO Daemon Daemon Primary interface is [eth0] Apr 30 03:28:31.390835 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:31.390843 systemd-networkd[1484]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 30 03:28:31.390879 systemd-networkd[1484]: eth0: DHCP lease lost Apr 30 03:28:31.391933 waagent[1798]: 2025-04-30T03:28:31.391878Z INFO Daemon Daemon Create user account if not exists Apr 30 03:28:31.393581 waagent[1798]: 2025-04-30T03:28:31.393238Z INFO Daemon Daemon User core already exists, skip useradd Apr 30 03:28:31.394051 waagent[1798]: 2025-04-30T03:28:31.394010Z INFO Daemon Daemon Configure sudoer Apr 30 03:28:31.395154 waagent[1798]: 2025-04-30T03:28:31.395109Z INFO Daemon Daemon Configure sshd Apr 30 03:28:31.397004 waagent[1798]: 2025-04-30T03:28:31.396960Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 30 03:28:31.397625 waagent[1798]: 2025-04-30T03:28:31.397586Z INFO Daemon Daemon Deploy ssh public key. Apr 30 03:28:31.410511 systemd-networkd[1484]: eth0: DHCPv6 lease lost Apr 30 03:28:31.446405 systemd-networkd[1484]: eth0: DHCPv4 address 10.200.8.47/24, gateway 10.200.8.1 acquired from 168.63.129.16 Apr 30 03:28:32.545734 waagent[1798]: 2025-04-30T03:28:32.545670Z INFO Daemon Daemon Provisioning complete Apr 30 03:28:32.560093 waagent[1798]: 2025-04-30T03:28:32.560040Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 30 03:28:32.566635 waagent[1798]: 2025-04-30T03:28:32.561161Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Apr 30 03:28:32.566635 waagent[1798]: 2025-04-30T03:28:32.561959Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Apr 30 03:28:32.681925 waagent[1887]: 2025-04-30T03:28:32.681841Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Apr 30 03:28:32.682296 waagent[1887]: 2025-04-30T03:28:32.681979Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.3 Apr 30 03:28:32.682296 waagent[1887]: 2025-04-30T03:28:32.682062Z INFO ExtHandler ExtHandler Python: 3.11.9 Apr 30 03:28:32.739248 waagent[1887]: 2025-04-30T03:28:32.739174Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 30 03:28:32.739490 waagent[1887]: 2025-04-30T03:28:32.739436Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 03:28:32.739605 waagent[1887]: 2025-04-30T03:28:32.739552Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 03:28:32.747777 waagent[1887]: 2025-04-30T03:28:32.747718Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 30 03:28:32.753232 waagent[1887]: 2025-04-30T03:28:32.753180Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Apr 30 03:28:32.753678 waagent[1887]: 2025-04-30T03:28:32.753624Z INFO ExtHandler Apr 30 03:28:32.753762 waagent[1887]: 2025-04-30T03:28:32.753715Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 35510ab7-0a2b-4acf-86fa-5c22c730ccde eTag: 3948098541264354113 source: Fabric] Apr 30 03:28:32.754072 waagent[1887]: 2025-04-30T03:28:32.754021Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Apr 30 03:28:32.754676 waagent[1887]: 2025-04-30T03:28:32.754621Z INFO ExtHandler Apr 30 03:28:32.754740 waagent[1887]: 2025-04-30T03:28:32.754705Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 30 03:28:32.758056 waagent[1887]: 2025-04-30T03:28:32.758011Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 30 03:28:32.833264 waagent[1887]: 2025-04-30T03:28:32.833154Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CD5618FED209AE165279B352CDB8246424D06A72', 'hasPrivateKey': True} Apr 30 03:28:32.833619 waagent[1887]: 2025-04-30T03:28:32.833566Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CD074ACA2AD94BCC030067C9BE04C48910B9D16A', 'hasPrivateKey': False} Apr 30 03:28:32.834020 waagent[1887]: 2025-04-30T03:28:32.833970Z INFO ExtHandler Fetch goal state completed Apr 30 03:28:32.848192 waagent[1887]: 2025-04-30T03:28:32.848134Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1887 Apr 30 03:28:32.848332 waagent[1887]: 2025-04-30T03:28:32.848287Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 30 03:28:32.849839 waagent[1887]: 2025-04-30T03:28:32.849784Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.3', '', 'Flatcar Container Linux by Kinvolk'] Apr 30 03:28:32.850205 waagent[1887]: 2025-04-30T03:28:32.850155Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 30 03:28:32.882221 waagent[1887]: 2025-04-30T03:28:32.882178Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 30 03:28:32.882448 waagent[1887]: 2025-04-30T03:28:32.882397Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 30 03:28:32.890039 waagent[1887]: 2025-04-30T03:28:32.889814Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 30 03:28:32.896483 systemd[1]: Reloading requested from client PID 1902 ('systemctl') (unit waagent.service)... Apr 30 03:28:32.896499 systemd[1]: Reloading... Apr 30 03:28:32.997383 zram_generator::config[1936]: No configuration found. Apr 30 03:28:33.112185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:33.192345 systemd[1]: Reloading finished in 295 ms. Apr 30 03:28:33.217080 waagent[1887]: 2025-04-30T03:28:33.216705Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Apr 30 03:28:33.225535 systemd[1]: Reloading requested from client PID 1993 ('systemctl') (unit waagent.service)... Apr 30 03:28:33.225551 systemd[1]: Reloading... Apr 30 03:28:33.308416 zram_generator::config[2023]: No configuration found. Apr 30 03:28:33.439714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:33.520380 systemd[1]: Reloading finished in 294 ms. 
Apr 30 03:28:33.548385 waagent[1887]: 2025-04-30T03:28:33.546764Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 30 03:28:33.548385 waagent[1887]: 2025-04-30T03:28:33.546957Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 30 03:28:34.760876 waagent[1887]: 2025-04-30T03:28:34.760780Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 30 03:28:34.764881 waagent[1887]: 2025-04-30T03:28:34.764804Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 30 03:28:34.765834 waagent[1887]: 2025-04-30T03:28:34.765767Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 30 03:28:34.766336 waagent[1887]: 2025-04-30T03:28:34.766265Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 30 03:28:34.766543 waagent[1887]: 2025-04-30T03:28:34.766482Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 03:28:34.766709 waagent[1887]: 2025-04-30T03:28:34.766652Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 03:28:34.766843 waagent[1887]: 2025-04-30T03:28:34.766790Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 03:28:34.767094 waagent[1887]: 2025-04-30T03:28:34.767034Z INFO EnvHandler ExtHandler Configure routes Apr 30 03:28:34.767219 waagent[1887]: 2025-04-30T03:28:34.767159Z INFO EnvHandler ExtHandler Gateway:None Apr 30 03:28:34.767336 waagent[1887]: 2025-04-30T03:28:34.767279Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 03:28:34.767766 waagent[1887]: 2025-04-30T03:28:34.767706Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 30 03:28:34.767855 waagent[1887]: 2025-04-30T03:28:34.767777Z INFO EnvHandler ExtHandler Routes:None Apr 30 03:28:34.767969 waagent[1887]: 2025-04-30T03:28:34.767897Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 30 03:28:34.768905 waagent[1887]: 2025-04-30T03:28:34.768851Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 30 03:28:34.769396 waagent[1887]: 2025-04-30T03:28:34.769281Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 30 03:28:34.769541 waagent[1887]: 2025-04-30T03:28:34.769480Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
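[Editor's note] The burst of start-ups above (EnvHandler, SendTelemetryHandler, TelemetryEventsCollector, MonitorHandler) plus the 6-second goal state period amounts to a simple supervision pattern: several daemon threads, each on its own loop, under a main polling loop. A deliberately minimal sketch of that shape — all names and loop bodies here are illustrative, not the agent's real code:

```python
# Minimal sketch of the shape logged above: daemon handler threads plus
# a 6-second goal-state polling loop. Names and bodies are illustrative,
# not WALinuxAgent's actual implementation.
import threading
import time

GOAL_STATE_PERIOD = 6  # matches "Goal State Period: 6 sec"

def env_handler():
    while True:
        # hypothetical: configure routes, verify firewall rules
        time.sleep(GOAL_STATE_PERIOD)

def telemetry_handler():
    while True:
        # hypothetical: drain and send queued telemetry events
        time.sleep(GOAL_STATE_PERIOD)

for fn in (env_handler, telemetry_handler):
    threading.Thread(target=fn, name=fn.__name__, daemon=True).start()

while True:  # main loop: fetch goal state, process, report status
    time.sleep(GOAL_STATE_PERIOD)
```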
Apr 30 03:28:34.770123 waagent[1887]: 2025-04-30T03:28:34.770068Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 30 03:28:34.770358 waagent[1887]: 2025-04-30T03:28:34.770305Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 30 03:28:34.770358 waagent[1887]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 30 03:28:34.770358 waagent[1887]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Apr 30 03:28:34.770358 waagent[1887]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 30 03:28:34.770358 waagent[1887]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 30 03:28:34.770358 waagent[1887]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 30 03:28:34.770358 waagent[1887]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 30 03:28:34.778415 waagent[1887]: 2025-04-30T03:28:34.777605Z INFO ExtHandler ExtHandler Apr 30 03:28:34.778415 waagent[1887]: 2025-04-30T03:28:34.777700Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e3b60cca-3cb3-47cb-9cc8-10fe076a161c correlation 9d344d08-fc85-4f69-9003-3a88a01c6581 created: 2025-04-30T03:27:28.407219Z] Apr 30 03:28:34.778415 waagent[1887]: 2025-04-30T03:28:34.778110Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Apr 30 03:28:34.780921 waagent[1887]: 2025-04-30T03:28:34.780867Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Apr 30 03:28:34.812542 waagent[1887]: 2025-04-30T03:28:34.812480Z INFO MonitorHandler ExtHandler Network interfaces: Apr 30 03:28:34.812542 waagent[1887]: Executing ['ip', '-a', '-o', 'link']: Apr 30 03:28:34.812542 waagent[1887]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 30 03:28:34.812542 waagent[1887]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:47:cc brd ff:ff:ff:ff:ff:ff Apr 30 03:28:34.812542 waagent[1887]: 3: enP27410s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:47:cc brd ff:ff:ff:ff:ff:ff\ altname enP27410p0s2 Apr 30 03:28:34.812542 waagent[1887]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 30 03:28:34.812542 waagent[1887]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 30 03:28:34.812542 waagent[1887]: 2: eth0 inet 10.200.8.47/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 30 03:28:34.812542 waagent[1887]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 30 03:28:34.812542 waagent[1887]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Apr 30 03:28:34.812542 waagent[1887]: 2: eth0 inet6 fe80::7e1e:52ff:fe35:47cc/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 30 03:28:34.812542 waagent[1887]: 3: enP27410s1 inet6 fe80::7e1e:52ff:fe35:47cc/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 30 03:28:34.834028 waagent[1887]: 2025-04-30T03:28:34.833962Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 83C4BC67-10B2-46B9-B572-FE603F106D49;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 
0] Apr 30 03:28:34.899014 waagent[1887]: 2025-04-30T03:28:34.898945Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Apr 30 03:28:34.899014 waagent[1887]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 03:28:34.899014 waagent[1887]: pkts bytes target prot opt in out source destination Apr 30 03:28:34.899014 waagent[1887]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 30 03:28:34.899014 waagent[1887]: pkts bytes target prot opt in out source destination Apr 30 03:28:34.899014 waagent[1887]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 03:28:34.899014 waagent[1887]: pkts bytes target prot opt in out source destination Apr 30 03:28:34.899014 waagent[1887]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 30 03:28:34.899014 waagent[1887]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 30 03:28:34.899014 waagent[1887]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 30 03:28:34.902204 waagent[1887]: 2025-04-30T03:28:34.902147Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 30 03:28:34.902204 waagent[1887]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 03:28:34.902204 waagent[1887]: pkts bytes target prot opt in out source destination Apr 30 03:28:34.902204 waagent[1887]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 30 03:28:34.902204 waagent[1887]: pkts bytes target prot opt in out source destination Apr 30 03:28:34.902204 waagent[1887]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 03:28:34.902204 waagent[1887]: pkts bytes target prot opt in out source destination Apr 30 03:28:34.902204 waagent[1887]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 30 03:28:34.902204 waagent[1887]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 30 03:28:34.902204 waagent[1887]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 30 03:28:34.902651 waagent[1887]: 2025-04-30T03:28:34.902476Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 30 03:28:40.517475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 03:28:40.522579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:40.619802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:40.633052 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:28:41.273958 kubelet[2123]: E0430 03:28:41.273863 2123 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:28:41.277521 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:28:41.277710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:28:45.297384 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 03:28:45.305654 systemd[1]: Started sshd@0-10.200.8.47:22-10.200.16.10:37918.service - OpenSSH per-connection server daemon (10.200.16.10:37918). 
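[Editor's note] The routing table the MonitorHandler dumped above is the raw /proc/net/route format, in which Destination, Gateway, and Mask are little-endian hex IPv4 words: 0108C80A decodes to 10.200.8.1 (the DHCP gateway), FEA9FEA9 to 169.254.169.254 (the metadata service), and 10813FA8 to 168.63.129.16 — the wire server address the firewall rules above are scoped to. A short decoder:

```python
# Decode the raw /proc/net/route dump shown above: Destination, Gateway
# and Mask are little-endian hex IPv4 words (e.g. 0108C80A -> 10.200.8.1).
import socket
import struct

def hex_to_ip(word: str) -> str:
    return socket.inet_ntoa(struct.pack("<L", int(word, 16)))

with open("/proc/net/route") as f:
    next(f)  # skip the Iface/Destination/Gateway/... header row
    for line in f:
        fields = line.split()
        iface, dest, gw, mask = fields[0], fields[1], fields[2], fields[7]
        print(f"{iface}: {hex_to_ip(dest)}/{hex_to_ip(mask)} via {hex_to_ip(gw)}")
```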
Apr 30 03:28:45.997549 sshd[2131]: Accepted publickey for core from 10.200.16.10 port 37918 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:28:45.999254 sshd[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:46.004420 systemd-logind[1671]: New session 3 of user core. Apr 30 03:28:46.013507 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:28:46.552144 systemd[1]: Started sshd@1-10.200.8.47:22-10.200.16.10:37930.service - OpenSSH per-connection server daemon (10.200.16.10:37930). Apr 30 03:28:47.174630 sshd[2136]: Accepted publickey for core from 10.200.16.10 port 37930 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:28:47.176099 sshd[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:47.180605 systemd-logind[1671]: New session 4 of user core. Apr 30 03:28:47.190682 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:28:47.621334 sshd[2136]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:47.625628 systemd[1]: sshd@1-10.200.8.47:22-10.200.16.10:37930.service: Deactivated successfully. Apr 30 03:28:47.627836 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:28:47.628760 systemd-logind[1671]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:28:47.629806 systemd-logind[1671]: Removed session 4. Apr 30 03:28:47.736648 systemd[1]: Started sshd@2-10.200.8.47:22-10.200.16.10:37940.service - OpenSSH per-connection server daemon (10.200.16.10:37940). Apr 30 03:28:48.358629 sshd[2143]: Accepted publickey for core from 10.200.16.10 port 37940 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:28:48.360315 sshd[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:48.364716 systemd-logind[1671]: New session 5 of user core. Apr 30 03:28:48.372638 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 03:28:48.801708 sshd[2143]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:48.805031 systemd[1]: sshd@2-10.200.8.47:22-10.200.16.10:37940.service: Deactivated successfully. Apr 30 03:28:48.807256 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:28:48.808979 systemd-logind[1671]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:28:48.809874 systemd-logind[1671]: Removed session 5. Apr 30 03:28:48.912299 systemd[1]: Started sshd@3-10.200.8.47:22-10.200.16.10:37954.service - OpenSSH per-connection server daemon (10.200.16.10:37954). Apr 30 03:28:49.547181 sshd[2150]: Accepted publickey for core from 10.200.16.10 port 37954 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:28:49.548926 sshd[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:49.553750 systemd-logind[1671]: New session 6 of user core. Apr 30 03:28:49.560713 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 03:28:49.993499 sshd[2150]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:49.997996 systemd[1]: sshd@3-10.200.8.47:22-10.200.16.10:37954.service: Deactivated successfully. Apr 30 03:28:49.999916 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:28:50.000618 systemd-logind[1671]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:28:50.001499 systemd-logind[1671]: Removed session 6. 
Apr 30 03:28:50.108969 systemd[1]: Started sshd@4-10.200.8.47:22-10.200.16.10:40594.service - OpenSSH per-connection server daemon (10.200.16.10:40594). Apr 30 03:28:50.731376 sshd[2157]: Accepted publickey for core from 10.200.16.10 port 40594 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:28:50.733093 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:50.738468 systemd-logind[1671]: New session 7 of user core. Apr 30 03:28:50.745756 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 03:28:51.225977 sudo[2160]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:28:51.226452 sudo[2160]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:28:51.250909 sudo[2160]: pam_unix(sudo:session): session closed for user root Apr 30 03:28:51.357767 sshd[2157]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:51.361163 systemd[1]: sshd@4-10.200.8.47:22-10.200.16.10:40594.service: Deactivated successfully. Apr 30 03:28:51.363137 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:28:51.364093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 03:28:51.365562 systemd-logind[1671]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:28:51.370583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:51.371796 systemd-logind[1671]: Removed session 7. Apr 30 03:28:51.469472 systemd[1]: Started sshd@5-10.200.8.47:22-10.200.16.10:40604.service - OpenSSH per-connection server daemon (10.200.16.10:40604). Apr 30 03:28:51.490938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:51.495318 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:28:51.936978 chronyd[1669]: Selected source PHC0 Apr 30 03:28:52.096306 sshd[2170]: Accepted publickey for core from 10.200.16.10 port 40604 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:28:52.098728 sshd[2170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:52.105599 systemd-logind[1671]: New session 8 of user core. Apr 30 03:28:52.109649 kubelet[2175]: E0430 03:28:52.108242 2175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:28:52.109659 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 03:28:52.110037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:28:52.110209 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
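[Editor's note] The kubelet crash loop here (restart counter 1, then 2, with more to come) is expected at this stage: the unit starts kubelet with --config pointing at /var/lib/kubelet/config.yaml, and nothing has written that file yet. On a kubeadm-style bootstrap it appears when the node is initialized or joined, after which the restarts stop. For orientation only, a sketch of the kind of minimal KubeletConfiguration that would satisfy the check — the field values are assumptions for illustration, not what this node eventually used:

```python
# Illustrative only: write a minimal KubeletConfiguration of the kind a
# bootstrap tool generates at /var/lib/kubelet/config.yaml. Field values
# below are assumptions for demonstration, not this node's real config.
from pathlib import Path
from textwrap import dedent

KUBELET_CONFIG = dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    staticPodPath: /etc/kubernetes/manifests
    cgroupDriver: systemd
""")

Path("/var/lib/kubelet").mkdir(parents=True, exist_ok=True)
Path("/var/lib/kubelet/config.yaml").write_text(KUBELET_CONFIG)
```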
Apr 30 03:28:52.441588 sudo[2184]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:28:52.441939 sudo[2184]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:28:52.445108 sudo[2184]: pam_unix(sudo:session): session closed for user root Apr 30 03:28:52.449875 sudo[2183]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:28:52.450200 sudo[2183]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:28:52.461675 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:28:52.464113 auditctl[2187]: No rules Apr 30 03:28:52.464491 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:28:52.464682 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:28:52.467129 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:28:52.491921 augenrules[2205]: No rules Apr 30 03:28:52.493150 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:28:52.494355 sudo[2183]: pam_unix(sudo:session): session closed for user root Apr 30 03:28:52.597808 sshd[2170]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:52.600968 systemd[1]: sshd@5-10.200.8.47:22-10.200.16.10:40604.service: Deactivated successfully. Apr 30 03:28:52.602760 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:28:52.604263 systemd-logind[1671]: Session 8 logged out. Waiting for processes to exit. Apr 30 03:28:52.605207 systemd-logind[1671]: Removed session 8. Apr 30 03:28:52.708049 systemd[1]: Started sshd@6-10.200.8.47:22-10.200.16.10:40610.service - OpenSSH per-connection server daemon (10.200.16.10:40610). Apr 30 03:28:53.334317 sshd[2213]: Accepted publickey for core from 10.200.16.10 port 40610 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:28:53.336087 sshd[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:53.341215 systemd-logind[1671]: New session 9 of user core. Apr 30 03:28:53.347729 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 03:28:53.679347 sudo[2216]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:28:53.679715 sudo[2216]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:28:54.839662 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 03:28:54.839751 (dockerd)[2231]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:28:56.209028 dockerd[2231]: time="2025-04-30T03:28:56.208967667Z" level=info msg="Starting up" Apr 30 03:28:56.537830 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2387971060-merged.mount: Deactivated successfully. Apr 30 03:28:56.604830 dockerd[2231]: time="2025-04-30T03:28:56.604792167Z" level=info msg="Loading containers: start." Apr 30 03:28:56.794390 kernel: Initializing XFRM netlink socket Apr 30 03:28:56.900270 systemd-networkd[1484]: docker0: Link UP Apr 30 03:28:56.923811 dockerd[2231]: time="2025-04-30T03:28:56.923774667Z" level=info msg="Loading containers: done." 
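[Editor's note] Once the daemon finishes loading containers it reports "API listen on /run/docker.sock" (next entry). That UNIX socket speaks plain HTTP, so it can be probed without the docker CLI; a stdlib-only sketch (GET /version is a real Engine API route, the connection class is mine):

```python
# Sketch: talk to the Docker Engine API over /run/docker.sock using only
# the standard library (the socket speaks plain HTTP).
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, sock_path: str):
        super().__init__("localhost")
        self._sock_path = sock_path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self._sock_path)
        self.sock = s

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
print(conn.getresponse().read().decode())  # engine version, API version, ...
```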
Apr 30 03:28:56.995560 dockerd[2231]: time="2025-04-30T03:28:56.995519667Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:28:56.995711 dockerd[2231]: time="2025-04-30T03:28:56.995630867Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:28:56.995793 dockerd[2231]: time="2025-04-30T03:28:56.995766967Z" level=info msg="Daemon has completed initialization" Apr 30 03:28:57.044070 dockerd[2231]: time="2025-04-30T03:28:57.043813667Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:28:57.044309 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 03:28:58.148039 containerd[1693]: time="2025-04-30T03:28:58.147974167Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" Apr 30 03:28:58.813120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2045538738.mount: Deactivated successfully. Apr 30 03:29:00.319492 containerd[1693]: time="2025-04-30T03:29:00.319442679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:00.321385 containerd[1693]: time="2025-04-30T03:29:00.321329180Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682887" Apr 30 03:29:00.324260 containerd[1693]: time="2025-04-30T03:29:00.324106682Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:00.328885 containerd[1693]: time="2025-04-30T03:29:00.328824785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:00.332389 containerd[1693]: time="2025-04-30T03:29:00.332127487Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.18410442s" Apr 30 03:29:00.332389 containerd[1693]: time="2025-04-30T03:29:00.332171987Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" Apr 30 03:29:00.334871 containerd[1693]: time="2025-04-30T03:29:00.334789088Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" Apr 30 03:29:01.792172 containerd[1693]: time="2025-04-30T03:29:01.792112220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:01.794121 containerd[1693]: time="2025-04-30T03:29:01.794063221Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779597" Apr 30 03:29:01.799270 containerd[1693]: time="2025-04-30T03:29:01.799223424Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:01.807399 containerd[1693]: time="2025-04-30T03:29:01.807349229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:01.808502 containerd[1693]: time="2025-04-30T03:29:01.808313529Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.473326741s" Apr 30 03:29:01.808502 containerd[1693]: time="2025-04-30T03:29:01.808350629Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" Apr 30 03:29:01.808837 containerd[1693]: time="2025-04-30T03:29:01.808812530Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" Apr 30 03:29:02.267214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 03:29:02.272607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:02.397666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:02.402060 (kubelet)[2431]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:29:02.459396 kubelet[2431]: E0430 03:29:02.459328 2431 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:29:02.462306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:29:02.462483 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 03:29:03.662948 containerd[1693]: time="2025-04-30T03:29:03.662899988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:03.664662 containerd[1693]: time="2025-04-30T03:29:03.664598489Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169946" Apr 30 03:29:03.668499 containerd[1693]: time="2025-04-30T03:29:03.668436291Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:03.673048 containerd[1693]: time="2025-04-30T03:29:03.672998694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:03.674482 containerd[1693]: time="2025-04-30T03:29:03.673947094Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.865097964s" Apr 30 03:29:03.674482 containerd[1693]: time="2025-04-30T03:29:03.673993095Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" Apr 30 03:29:03.674874 containerd[1693]: time="2025-04-30T03:29:03.674782995Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" Apr 30 03:29:04.884193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084382472.mount: Deactivated successfully. 
Apr 30 03:29:05.391964 containerd[1693]: time="2025-04-30T03:29:05.391912075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:05.393583 containerd[1693]: time="2025-04-30T03:29:05.393528176Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917864" Apr 30 03:29:05.396544 containerd[1693]: time="2025-04-30T03:29:05.396492778Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:05.402837 containerd[1693]: time="2025-04-30T03:29:05.402784582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:05.403734 containerd[1693]: time="2025-04-30T03:29:05.403338382Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.728518587s" Apr 30 03:29:05.403734 containerd[1693]: time="2025-04-30T03:29:05.403396282Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" Apr 30 03:29:05.404111 containerd[1693]: time="2025-04-30T03:29:05.404053982Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Apr 30 03:29:05.932231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2097460609.mount: Deactivated successfully. 
Apr 30 03:29:07.210796 containerd[1693]: time="2025-04-30T03:29:07.210741814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:07.214444 containerd[1693]: time="2025-04-30T03:29:07.214380716Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Apr 30 03:29:07.218608 containerd[1693]: time="2025-04-30T03:29:07.218559818Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:07.223350 containerd[1693]: time="2025-04-30T03:29:07.223294021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:07.224508 containerd[1693]: time="2025-04-30T03:29:07.224334021Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.820245639s" Apr 30 03:29:07.224508 containerd[1693]: time="2025-04-30T03:29:07.224391922Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Apr 30 03:29:07.225280 containerd[1693]: time="2025-04-30T03:29:07.225110722Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 03:29:07.713103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount715466513.mount: Deactivated successfully. 
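[Editor's note] Each successful pull above ends with a repo tag, a digest, an unpacked size, and a wall-clock duration ("in 2.18410442s", "in 519.710697ms", ...), so pull performance can be tabulated straight from the log text. A small parser for lines in exactly this format; piping this section through it on stdin lists each image with its duration in seconds:

```python
# Tabulate image pull durations from containerd log lines of the form:
#   Pulled image \"registry.k8s.io/pause:3.10\" ... in 519.710697ms
import re
import sys

PULL_RE = re.compile(r'Pulled image \\?"([^"\\]+)\\?".*? in ([0-9.]+)(ms|s)')

for line in sys.stdin:
    m = PULL_RE.search(line)
    if m:
        image, value, unit = m.groups()
        seconds = float(value) / (1000 if unit == "ms" else 1)
        print(f"{image:55s} {seconds:8.3f}s")
```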
Apr 30 03:29:07.732960 containerd[1693]: time="2025-04-30T03:29:07.732906612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:07.736758 containerd[1693]: time="2025-04-30T03:29:07.736616714Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Apr 30 03:29:07.739088 containerd[1693]: time="2025-04-30T03:29:07.739037815Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:07.744176 containerd[1693]: time="2025-04-30T03:29:07.744125018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:07.745007 containerd[1693]: time="2025-04-30T03:29:07.744854919Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 519.710697ms" Apr 30 03:29:07.745007 containerd[1693]: time="2025-04-30T03:29:07.744893419Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 30 03:29:07.745638 containerd[1693]: time="2025-04-30T03:29:07.745603019Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Apr 30 03:29:08.308695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049236560.mount: Deactivated successfully. Apr 30 03:29:10.498217 containerd[1693]: time="2025-04-30T03:29:10.498166584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:10.501294 containerd[1693]: time="2025-04-30T03:29:10.501148390Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Apr 30 03:29:10.505341 containerd[1693]: time="2025-04-30T03:29:10.505299197Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:10.511068 containerd[1693]: time="2025-04-30T03:29:10.510916108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:10.512639 containerd[1693]: time="2025-04-30T03:29:10.512474111Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.766831392s" Apr 30 03:29:10.512639 containerd[1693]: time="2025-04-30T03:29:10.512514411Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Apr 30 03:29:12.486936 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Apr 30 03:29:12.517066 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 03:29:12.526471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:12.655519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:12.662513 (kubelet)[2588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:29:12.715768 kubelet[2588]: E0430 03:29:12.715719 2588 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:29:12.718590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:29:12.718769 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:29:13.683025 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:13.696459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:13.728722 systemd[1]: Reloading requested from client PID 2603 ('systemctl') (unit session-9.scope)... Apr 30 03:29:13.728739 systemd[1]: Reloading... Apr 30 03:29:13.851407 zram_generator::config[2639]: No configuration found. Apr 30 03:29:13.880946 update_engine[1673]: I20250430 03:29:13.880402 1673 update_attempter.cc:509] Updating boot flags... Apr 30 03:29:13.979189 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:29:14.061151 systemd[1]: Reloading finished in 331 ms. Apr 30 03:29:14.437762 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 03:29:14.437926 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 03:29:14.438602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:14.447578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:14.490387 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2716) Apr 30 03:29:14.611383 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2720) Apr 30 03:29:15.215387 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2720) Apr 30 03:29:15.941667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:15.947335 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:29:15.983454 kubelet[2803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:29:15.983454 kubelet[2803]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 30 03:29:15.983454 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:29:15.983838 kubelet[2803]: I0430 03:29:15.983518 2803 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:29:16.289709 kubelet[2803]: I0430 03:29:16.289669 2803 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 03:29:16.289709 kubelet[2803]: I0430 03:29:16.289697 2803 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:29:16.290030 kubelet[2803]: I0430 03:29:16.290008 2803 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 03:29:17.026299 kubelet[2803]: E0430 03:29:17.026235 2803 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:29:17.027442 kubelet[2803]: I0430 03:29:17.027263 2803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:29:17.038253 kubelet[2803]: E0430 03:29:17.038212 2803 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 03:29:17.038253 kubelet[2803]: I0430 03:29:17.038241 2803 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 03:29:17.041639 kubelet[2803]: I0430 03:29:17.041601 2803 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:29:17.042632 kubelet[2803]: I0430 03:29:17.042584 2803 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:29:17.042817 kubelet[2803]: I0430 03:29:17.042630 2803 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-a5554f61da","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 03:29:17.042965 kubelet[2803]: I0430 03:29:17.042823 2803 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:29:17.042965 kubelet[2803]: I0430 03:29:17.042837 2803 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 03:29:17.043047 kubelet[2803]: I0430 03:29:17.042986 2803 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:17.045989 kubelet[2803]: I0430 03:29:17.045966 2803 kubelet.go:446] "Attempting to sync node with API server" Apr 30 03:29:17.046075 kubelet[2803]: I0430 03:29:17.045990 2803 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:29:17.046075 kubelet[2803]: I0430 03:29:17.046018 2803 kubelet.go:352] "Adding apiserver pod source" Apr 30 03:29:17.046075 kubelet[2803]: I0430 03:29:17.046031 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:29:17.054401 kubelet[2803]: W0430 03:29:17.054215 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused Apr 30 03:29:17.054401 kubelet[2803]: E0430 03:29:17.054275 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:29:17.054690 kubelet[2803]: W0430 
03:29:17.054597 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-a5554f61da&limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused Apr 30 03:29:17.054690 kubelet[2803]: E0430 03:29:17.054655 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-a5554f61da&limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:29:17.054798 kubelet[2803]: I0430 03:29:17.054742 2803 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:29:17.055378 kubelet[2803]: I0430 03:29:17.055240 2803 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:29:17.056029 kubelet[2803]: W0430 03:29:17.056007 2803 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 03:29:17.058135 kubelet[2803]: I0430 03:29:17.057941 2803 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 03:29:17.058135 kubelet[2803]: I0430 03:29:17.057981 2803 server.go:1287] "Started kubelet" Apr 30 03:29:17.058567 kubelet[2803]: I0430 03:29:17.058537 2803 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:29:17.060403 kubelet[2803]: I0430 03:29:17.059662 2803 server.go:490] "Adding debug handlers to kubelet server" Apr 30 03:29:17.061489 kubelet[2803]: I0430 03:29:17.061441 2803 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:29:17.061828 kubelet[2803]: I0430 03:29:17.061810 2803 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:29:17.062464 kubelet[2803]: I0430 03:29:17.062449 2803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:29:17.063836 kubelet[2803]: E0430 03:29:17.062133 2803 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.47:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.47:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-a-a5554f61da.183afaf9f70aa306 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-a5554f61da,UID:ci-4081.3.3-a-a5554f61da,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-a5554f61da,},FirstTimestamp:2025-04-30 03:29:17.057958662 +0000 UTC m=+1.107310484,LastTimestamp:2025-04-30 03:29:17.057958662 +0000 UTC m=+1.107310484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-a5554f61da,}" Apr 30 03:29:17.065304 kubelet[2803]: I0430 03:29:17.064885 2803 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 03:29:17.069111 kubelet[2803]: I0430 03:29:17.069093 2803 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 03:29:17.069646 kubelet[2803]: E0430 03:29:17.069622 
2803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-a5554f61da\" not found" Apr 30 03:29:17.069826 kubelet[2803]: I0430 03:29:17.069812 2803 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:29:17.069951 kubelet[2803]: I0430 03:29:17.069940 2803 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:29:17.070683 kubelet[2803]: W0430 03:29:17.070644 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused Apr 30 03:29:17.070810 kubelet[2803]: E0430 03:29:17.070791 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:29:17.070987 kubelet[2803]: E0430 03:29:17.070969 2803 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:29:17.071247 kubelet[2803]: I0430 03:29:17.071231 2803 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:29:17.071423 kubelet[2803]: I0430 03:29:17.071400 2803 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:29:17.071922 kubelet[2803]: E0430 03:29:17.071894 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-a5554f61da?timeout=10s\": dial tcp 10.200.8.47:6443: connect: connection refused" interval="200ms" Apr 30 03:29:17.072893 kubelet[2803]: I0430 03:29:17.072873 2803 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:29:17.084301 kubelet[2803]: I0430 03:29:17.084251 2803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:29:17.085249 kubelet[2803]: I0430 03:29:17.085221 2803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:29:17.085249 kubelet[2803]: I0430 03:29:17.085243 2803 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 03:29:17.085355 kubelet[2803]: I0430 03:29:17.085263 2803 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
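[Editor's note] Every request to https://10.200.8.47:6443 is refused because this kubelet is itself about to start the API server as a static pod; nothing is listening yet. Meanwhile the lease controller retries on a doubling interval, which the log shows directly: interval="200ms" here, then "400ms", "800ms", and "1.6s" further down. A sketch of that backoff shape — the constants are read off the log, the operation is hypothetical:

```python
# Illustrative backoff matching the lease-controller retries in this log:
# interval="200ms" -> "400ms" -> "800ms" -> "1.6s", doubling per failure.
import time

def retry_with_backoff(op, base=0.2, factor=2.0, cap=7.0):
    interval = base
    while True:
        try:
            return op()
        except ConnectionRefusedError:
            print(f"will retry in {interval:.1f}s")
            time.sleep(interval)
            interval = min(interval * factor, cap)

# hypothetical usage, mirroring the lease renewal attempts:
# retry_with_backoff(lambda: ensure_lease("ci-4081.3.3-a-a5554f61da"))
```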
Apr 30 03:29:17.085355 kubelet[2803]: I0430 03:29:17.085272 2803 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 03:29:17.085355 kubelet[2803]: E0430 03:29:17.085327 2803 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:29:17.093386 kubelet[2803]: W0430 03:29:17.093235 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused Apr 30 03:29:17.093386 kubelet[2803]: E0430 03:29:17.093280 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:29:17.102940 kubelet[2803]: I0430 03:29:17.102915 2803 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 03:29:17.102940 kubelet[2803]: I0430 03:29:17.102935 2803 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 03:29:17.103050 kubelet[2803]: I0430 03:29:17.102954 2803 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:17.111118 kubelet[2803]: I0430 03:29:17.111095 2803 policy_none.go:49] "None policy: Start" Apr 30 03:29:17.111118 kubelet[2803]: I0430 03:29:17.111116 2803 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 03:29:17.111235 kubelet[2803]: I0430 03:29:17.111129 2803 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:29:17.119254 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 03:29:17.127181 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 03:29:17.130403 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 03:29:17.141050 kubelet[2803]: I0430 03:29:17.141028 2803 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:29:17.141340 kubelet[2803]: I0430 03:29:17.141326 2803 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 03:29:17.141472 kubelet[2803]: I0430 03:29:17.141439 2803 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:29:17.141764 kubelet[2803]: I0430 03:29:17.141749 2803 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:29:17.142904 kubelet[2803]: E0430 03:29:17.142884 2803 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 03:29:17.143048 kubelet[2803]: E0430 03:29:17.143030 2803 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-a-a5554f61da\" not found" Apr 30 03:29:17.196775 systemd[1]: Created slice kubepods-burstable-podf22980ed5b69c8692f843088521cce25.slice - libcontainer container kubepods-burstable-podf22980ed5b69c8692f843088521cce25.slice. 
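[Editor's note] The kubepods-burstable-pod....slice units created here follow the kubelet systemd cgroup driver's naming: the QoS class plus the pod UID (for these static pods, a hash of the manifest) with any dashes mapped to underscores. A one-function sketch of the mapping — the helper name is mine, but the rule reproduces the units in the log:

```python
# Sketch of the kubelet systemd-cgroup-driver slice naming visible above.
# Function name is illustrative; the rule (QoS-class prefix + pod UID with
# dashes mapped to underscores + ".slice") matches the units logged here.
def pod_slice_name(pod_uid: str, qos_class: str = "burstable") -> str:
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("f22980ed5b69c8692f843088521cce25"))
# -> kubepods-burstable-podf22980ed5b69c8692f843088521cce25.slice
```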
Apr 30 03:29:17.214599 kubelet[2803]: E0430 03:29:17.214560 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-a5554f61da\" not found" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.217380 systemd[1]: Created slice kubepods-burstable-podf7c8587db4e50708fba0c82140af4522.slice - libcontainer container kubepods-burstable-podf7c8587db4e50708fba0c82140af4522.slice. Apr 30 03:29:17.225634 kubelet[2803]: E0430 03:29:17.225418 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-a5554f61da\" not found" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.227943 systemd[1]: Created slice kubepods-burstable-pod75433f5e2eae76115b6cd92e9690eaed.slice - libcontainer container kubepods-burstable-pod75433f5e2eae76115b6cd92e9690eaed.slice. Apr 30 03:29:17.229693 kubelet[2803]: E0430 03:29:17.229671 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-a5554f61da\" not found" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.244122 kubelet[2803]: I0430 03:29:17.244103 2803 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.244461 kubelet[2803]: E0430 03:29:17.244433 2803 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.8.47:6443/api/v1/nodes\": dial tcp 10.200.8.47:6443: connect: connection refused" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.271826 kubelet[2803]: I0430 03:29:17.271793 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f22980ed5b69c8692f843088521cce25-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-a5554f61da\" (UID: \"f22980ed5b69c8692f843088521cce25\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.272044 kubelet[2803]: I0430 03:29:17.271856 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7c8587db4e50708fba0c82140af4522-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-a5554f61da\" (UID: \"f7c8587db4e50708fba0c82140af4522\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.272044 kubelet[2803]: I0430 03:29:17.271893 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f22980ed5b69c8692f843088521cce25-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-a5554f61da\" (UID: \"f22980ed5b69c8692f843088521cce25\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.272044 kubelet[2803]: I0430 03:29:17.271922 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f22980ed5b69c8692f843088521cce25-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-a5554f61da\" (UID: \"f22980ed5b69c8692f843088521cce25\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.272044 kubelet[2803]: I0430 03:29:17.271952 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7c8587db4e50708fba0c82140af4522-ca-certs\") pod 
\"kube-controller-manager-ci-4081.3.3-a-a5554f61da\" (UID: \"f7c8587db4e50708fba0c82140af4522\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.272044 kubelet[2803]: I0430 03:29:17.271983 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f7c8587db4e50708fba0c82140af4522-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-a5554f61da\" (UID: \"f7c8587db4e50708fba0c82140af4522\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.272274 kubelet[2803]: I0430 03:29:17.272012 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7c8587db4e50708fba0c82140af4522-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-a5554f61da\" (UID: \"f7c8587db4e50708fba0c82140af4522\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.272274 kubelet[2803]: I0430 03:29:17.272040 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c8587db4e50708fba0c82140af4522-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-a5554f61da\" (UID: \"f7c8587db4e50708fba0c82140af4522\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.272274 kubelet[2803]: I0430 03:29:17.272066 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75433f5e2eae76115b6cd92e9690eaed-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-a5554f61da\" (UID: \"75433f5e2eae76115b6cd92e9690eaed\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.273169 kubelet[2803]: E0430 03:29:17.273128 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-a5554f61da?timeout=10s\": dial tcp 10.200.8.47:6443: connect: connection refused" interval="400ms" Apr 30 03:29:17.446964 kubelet[2803]: I0430 03:29:17.446821 2803 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.447560 kubelet[2803]: E0430 03:29:17.447332 2803 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.8.47:6443/api/v1/nodes\": dial tcp 10.200.8.47:6443: connect: connection refused" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.516875 containerd[1693]: time="2025-04-30T03:29:17.516828823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-a5554f61da,Uid:f22980ed5b69c8692f843088521cce25,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:17.527171 containerd[1693]: time="2025-04-30T03:29:17.527127757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-a5554f61da,Uid:f7c8587db4e50708fba0c82140af4522,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:17.530653 containerd[1693]: time="2025-04-30T03:29:17.530624202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-a5554f61da,Uid:75433f5e2eae76115b6cd92e9690eaed,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:17.674167 kubelet[2803]: E0430 03:29:17.674121 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.8.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-a5554f61da?timeout=10s\": dial tcp 10.200.8.47:6443: connect: connection refused" interval="800ms" Apr 30 03:29:17.850018 kubelet[2803]: I0430 03:29:17.849983 2803 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:17.850436 kubelet[2803]: E0430 03:29:17.850398 2803 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.8.47:6443/api/v1/nodes\": dial tcp 10.200.8.47:6443: connect: connection refused" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:18.038760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078413409.mount: Deactivated successfully. Apr 30 03:29:18.067027 containerd[1693]: time="2025-04-30T03:29:18.066976770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:18.069860 containerd[1693]: time="2025-04-30T03:29:18.069804907Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 30 03:29:18.073994 containerd[1693]: time="2025-04-30T03:29:18.073960861Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:18.077706 containerd[1693]: time="2025-04-30T03:29:18.077671809Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:18.081461 containerd[1693]: time="2025-04-30T03:29:18.081409857Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:29:18.087878 containerd[1693]: time="2025-04-30T03:29:18.087838341Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:18.090869 containerd[1693]: time="2025-04-30T03:29:18.090795179Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:29:18.093376 kubelet[2803]: W0430 03:29:18.093320 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-a5554f61da&limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused Apr 30 03:29:18.093667 kubelet[2803]: E0430 03:29:18.093399 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-a5554f61da&limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:29:18.095853 containerd[1693]: time="2025-04-30T03:29:18.095793744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:18.097054 containerd[1693]: 
time="2025-04-30T03:29:18.096540754Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 569.334196ms" Apr 30 03:29:18.098158 containerd[1693]: time="2025-04-30T03:29:18.098129475Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 581.214651ms" Apr 30 03:29:18.101483 containerd[1693]: time="2025-04-30T03:29:18.101303516Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.625213ms" Apr 30 03:29:18.263436 kubelet[2803]: W0430 03:29:18.263357 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused Apr 30 03:29:18.263436 kubelet[2803]: E0430 03:29:18.263446 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:29:18.475961 kubelet[2803]: E0430 03:29:18.475612 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-a5554f61da?timeout=10s\": dial tcp 10.200.8.47:6443: connect: connection refused" interval="1.6s" Apr 30 03:29:18.557174 kubelet[2803]: W0430 03:29:18.557064 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused Apr 30 03:29:18.557174 kubelet[2803]: E0430 03:29:18.557142 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:29:18.576686 kubelet[2803]: W0430 03:29:18.576610 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused Apr 30 03:29:18.576686 kubelet[2803]: E0430 03:29:18.576658 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.200.8.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:29:18.653538 kubelet[2803]: I0430 03:29:18.653504 2803 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:18.653916 kubelet[2803]: E0430 03:29:18.653880 2803 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.8.47:6443/api/v1/nodes\": dial tcp 10.200.8.47:6443: connect: connection refused" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:18.677289 containerd[1693]: time="2025-04-30T03:29:18.676983194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:18.677289 containerd[1693]: time="2025-04-30T03:29:18.677035695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:18.677289 containerd[1693]: time="2025-04-30T03:29:18.677051295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:18.677289 containerd[1693]: time="2025-04-30T03:29:18.677171297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:18.688862 containerd[1693]: time="2025-04-30T03:29:18.683281976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:18.688862 containerd[1693]: time="2025-04-30T03:29:18.687468331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:18.688862 containerd[1693]: time="2025-04-30T03:29:18.687491231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:18.688862 containerd[1693]: time="2025-04-30T03:29:18.687597832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:18.695208 containerd[1693]: time="2025-04-30T03:29:18.693521709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:18.695208 containerd[1693]: time="2025-04-30T03:29:18.693576510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:18.695208 containerd[1693]: time="2025-04-30T03:29:18.693605210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:18.695208 containerd[1693]: time="2025-04-30T03:29:18.693718912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:18.708586 systemd[1]: Started cri-containerd-3d7a35cac65f5fdafa851f43c2f2a6ca6b5f1b0a54f72b8743e8d79244ed08c4.scope - libcontainer container 3d7a35cac65f5fdafa851f43c2f2a6ca6b5f1b0a54f72b8743e8d79244ed08c4. Apr 30 03:29:18.716763 systemd[1]: Started cri-containerd-fb6a4bd30268a1c24244c097b414b56ced231d09a485b335ac671c6d812f33b9.scope - libcontainer container fb6a4bd30268a1c24244c097b414b56ced231d09a485b335ac671c6d812f33b9. 
Apr 30 03:29:18.722668 systemd[1]: Started cri-containerd-92021aa1ea7f3988a8419d35511d1abe867b7a0c30fce8bc20d93c49d02c752a.scope - libcontainer container 92021aa1ea7f3988a8419d35511d1abe867b7a0c30fce8bc20d93c49d02c752a. Apr 30 03:29:18.797831 containerd[1693]: time="2025-04-30T03:29:18.797724863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-a5554f61da,Uid:f7c8587db4e50708fba0c82140af4522,Namespace:kube-system,Attempt:0,} returns sandbox id \"92021aa1ea7f3988a8419d35511d1abe867b7a0c30fce8bc20d93c49d02c752a\"" Apr 30 03:29:18.798512 containerd[1693]: time="2025-04-30T03:29:18.798332871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-a5554f61da,Uid:f22980ed5b69c8692f843088521cce25,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d7a35cac65f5fdafa851f43c2f2a6ca6b5f1b0a54f72b8743e8d79244ed08c4\"" Apr 30 03:29:18.806668 containerd[1693]: time="2025-04-30T03:29:18.805939970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-a5554f61da,Uid:75433f5e2eae76115b6cd92e9690eaed,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb6a4bd30268a1c24244c097b414b56ced231d09a485b335ac671c6d812f33b9\"" Apr 30 03:29:18.807645 containerd[1693]: time="2025-04-30T03:29:18.807613291Z" level=info msg="CreateContainer within sandbox \"3d7a35cac65f5fdafa851f43c2f2a6ca6b5f1b0a54f72b8743e8d79244ed08c4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:29:18.808317 containerd[1693]: time="2025-04-30T03:29:18.808283200Z" level=info msg="CreateContainer within sandbox \"92021aa1ea7f3988a8419d35511d1abe867b7a0c30fce8bc20d93c49d02c752a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:29:18.813507 containerd[1693]: time="2025-04-30T03:29:18.813458967Z" level=info msg="CreateContainer within sandbox \"fb6a4bd30268a1c24244c097b414b56ced231d09a485b335ac671c6d812f33b9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:29:18.853594 containerd[1693]: time="2025-04-30T03:29:18.853551288Z" level=info msg="CreateContainer within sandbox \"3d7a35cac65f5fdafa851f43c2f2a6ca6b5f1b0a54f72b8743e8d79244ed08c4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fc600705ab618135ae99c95d9d17be860da7807cfc5cf3be382f4a0d309a5858\"" Apr 30 03:29:18.854524 containerd[1693]: time="2025-04-30T03:29:18.854390599Z" level=info msg="StartContainer for \"fc600705ab618135ae99c95d9d17be860da7807cfc5cf3be382f4a0d309a5858\"" Apr 30 03:29:18.867325 containerd[1693]: time="2025-04-30T03:29:18.867235466Z" level=info msg="CreateContainer within sandbox \"92021aa1ea7f3988a8419d35511d1abe867b7a0c30fce8bc20d93c49d02c752a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ac8ba6a80d8c4772f30c820a347d64cd5effc441767cd85f21bd408227be2a02\"" Apr 30 03:29:18.868534 containerd[1693]: time="2025-04-30T03:29:18.867730772Z" level=info msg="StartContainer for \"ac8ba6a80d8c4772f30c820a347d64cd5effc441767cd85f21bd408227be2a02\"" Apr 30 03:29:18.875754 containerd[1693]: time="2025-04-30T03:29:18.875725876Z" level=info msg="CreateContainer within sandbox \"fb6a4bd30268a1c24244c097b414b56ced231d09a485b335ac671c6d812f33b9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c1aa9577c5fca74f624ddb3a0b252af1746c3da3c77b0cdab3260557ae293707\"" Apr 30 03:29:18.876281 containerd[1693]: time="2025-04-30T03:29:18.876258483Z" level=info msg="StartContainer for 
\"c1aa9577c5fca74f624ddb3a0b252af1746c3da3c77b0cdab3260557ae293707\"" Apr 30 03:29:18.882742 systemd[1]: Started cri-containerd-fc600705ab618135ae99c95d9d17be860da7807cfc5cf3be382f4a0d309a5858.scope - libcontainer container fc600705ab618135ae99c95d9d17be860da7807cfc5cf3be382f4a0d309a5858. Apr 30 03:29:18.909796 systemd[1]: Started cri-containerd-ac8ba6a80d8c4772f30c820a347d64cd5effc441767cd85f21bd408227be2a02.scope - libcontainer container ac8ba6a80d8c4772f30c820a347d64cd5effc441767cd85f21bd408227be2a02. Apr 30 03:29:18.927530 systemd[1]: Started cri-containerd-c1aa9577c5fca74f624ddb3a0b252af1746c3da3c77b0cdab3260557ae293707.scope - libcontainer container c1aa9577c5fca74f624ddb3a0b252af1746c3da3c77b0cdab3260557ae293707. Apr 30 03:29:18.970860 containerd[1693]: time="2025-04-30T03:29:18.970821212Z" level=info msg="StartContainer for \"fc600705ab618135ae99c95d9d17be860da7807cfc5cf3be382f4a0d309a5858\" returns successfully" Apr 30 03:29:18.994773 containerd[1693]: time="2025-04-30T03:29:18.994669421Z" level=info msg="StartContainer for \"ac8ba6a80d8c4772f30c820a347d64cd5effc441767cd85f21bd408227be2a02\" returns successfully" Apr 30 03:29:19.076461 containerd[1693]: time="2025-04-30T03:29:19.076263781Z" level=info msg="StartContainer for \"c1aa9577c5fca74f624ddb3a0b252af1746c3da3c77b0cdab3260557ae293707\" returns successfully" Apr 30 03:29:19.110406 kubelet[2803]: E0430 03:29:19.109331 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-a5554f61da\" not found" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:19.110406 kubelet[2803]: E0430 03:29:19.109820 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-a5554f61da\" not found" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:19.115385 kubelet[2803]: E0430 03:29:19.114615 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-a5554f61da\" not found" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:20.117720 kubelet[2803]: E0430 03:29:20.117637 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-a5554f61da\" not found" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:20.119317 kubelet[2803]: E0430 03:29:20.119085 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-a5554f61da\" not found" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:20.256527 kubelet[2803]: I0430 03:29:20.256462 2803 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:20.846867 kubelet[2803]: E0430 03:29:20.846835 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-a5554f61da\" not found" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:20.984685 kubelet[2803]: E0430 03:29:20.984637 2803 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-a-a5554f61da\" not found" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:21.051587 kubelet[2803]: I0430 03:29:21.051355 2803 apiserver.go:52] "Watching apiserver" Apr 30 03:29:21.052324 kubelet[2803]: E0430 03:29:21.052176 2803 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.3-a-a5554f61da.183afaf9f70aa306 default 0 0001-01-01 
00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-a5554f61da,UID:ci-4081.3.3-a-a5554f61da,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-a5554f61da,},FirstTimestamp:2025-04-30 03:29:17.057958662 +0000 UTC m=+1.107310484,LastTimestamp:2025-04-30 03:29:17.057958662 +0000 UTC m=+1.107310484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-a5554f61da,}" Apr 30 03:29:21.069967 kubelet[2803]: I0430 03:29:21.069939 2803 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:29:21.105904 kubelet[2803]: E0430 03:29:21.105661 2803 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.3-a-a5554f61da.183afaf9f7d0fdcf default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-a5554f61da,UID:ci-4081.3.3-a-a5554f61da,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-a5554f61da,},FirstTimestamp:2025-04-30 03:29:17.070958031 +0000 UTC m=+1.120309753,LastTimestamp:2025-04-30 03:29:17.070958031 +0000 UTC m=+1.120309753,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-a5554f61da,}" Apr 30 03:29:21.110877 kubelet[2803]: I0430 03:29:21.110850 2803 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:21.165827 kubelet[2803]: E0430 03:29:21.165626 2803 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.3-a-a5554f61da.183afaf9f9b073a3 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-a5554f61da,UID:ci-4081.3.3-a-a5554f61da,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081.3.3-a-a5554f61da status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-a5554f61da,},FirstTimestamp:2025-04-30 03:29:17.102379939 +0000 UTC m=+1.151731661,LastTimestamp:2025-04-30 03:29:17.102379939 +0000 UTC m=+1.151731661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-a5554f61da,}" Apr 30 03:29:21.169771 kubelet[2803]: I0430 03:29:21.169729 2803 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:21.178704 kubelet[2803]: E0430 03:29:21.178518 2803 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-a-a5554f61da\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:21.178704 kubelet[2803]: I0430 03:29:21.178545 2803 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:21.182620 kubelet[2803]: E0430 03:29:21.182461 2803 kubelet.go:3202] "Failed creating a mirror pod" 
err="pods \"kube-controller-manager-ci-4081.3.3-a-a5554f61da\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:21.182620 kubelet[2803]: I0430 03:29:21.182487 2803 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:21.184580 kubelet[2803]: E0430 03:29:21.184550 2803 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-a-a5554f61da\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:24.190860 systemd[1]: Reloading requested from client PID 3077 ('systemctl') (unit session-9.scope)... Apr 30 03:29:24.190875 systemd[1]: Reloading... Apr 30 03:29:24.291394 zram_generator::config[3117]: No configuration found. Apr 30 03:29:24.454591 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:29:24.554747 systemd[1]: Reloading finished in 363 ms. Apr 30 03:29:24.595044 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:24.609088 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:29:24.609302 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:24.613769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:24.718586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:24.732093 (kubelet)[3184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:29:24.772372 kubelet[3184]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:29:24.772372 kubelet[3184]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 03:29:24.772716 kubelet[3184]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:29:24.772716 kubelet[3184]: I0430 03:29:24.772471 3184 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:29:24.777749 kubelet[3184]: I0430 03:29:24.777718 3184 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 03:29:24.777749 kubelet[3184]: I0430 03:29:24.777740 3184 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:29:24.777999 kubelet[3184]: I0430 03:29:24.777979 3184 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 03:29:24.778981 kubelet[3184]: I0430 03:29:24.778958 3184 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 30 03:29:24.781689 kubelet[3184]: I0430 03:29:24.780910 3184 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:29:24.786863 kubelet[3184]: E0430 03:29:24.786823 3184 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 03:29:24.786863 kubelet[3184]: I0430 03:29:24.786850 3184 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 03:29:24.790753 kubelet[3184]: I0430 03:29:24.790726 3184 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:29:24.790958 kubelet[3184]: I0430 03:29:24.790918 3184 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:29:24.791123 kubelet[3184]: I0430 03:29:24.790950 3184 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-a5554f61da","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 03:29:24.791249 kubelet[3184]: I0430 03:29:24.791123 3184 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:29:24.791249 kubelet[3184]: I0430 03:29:24.791135 3184 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 03:29:24.791249 kubelet[3184]: I0430 03:29:24.791181 3184 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:24.791831 kubelet[3184]: I0430 03:29:24.791418 3184 kubelet.go:446] "Attempting to sync node with API server" Apr 30 03:29:24.791831 kubelet[3184]: I0430 03:29:24.791438 3184 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:29:24.791831 kubelet[3184]: I0430 03:29:24.791459 3184 kubelet.go:352] "Adding apiserver pod source" Apr 30 03:29:24.791831 kubelet[3184]: I0430 03:29:24.791470 3184 apiserver.go:42] "Waiting for node sync 
before watching apiserver pods" Apr 30 03:29:24.794524 kubelet[3184]: I0430 03:29:24.794483 3184 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:29:24.794924 kubelet[3184]: I0430 03:29:24.794904 3184 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:29:24.795836 kubelet[3184]: I0430 03:29:24.795817 3184 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 03:29:24.795943 kubelet[3184]: I0430 03:29:24.795855 3184 server.go:1287] "Started kubelet" Apr 30 03:29:24.797995 kubelet[3184]: I0430 03:29:24.797970 3184 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:29:24.805642 kubelet[3184]: I0430 03:29:24.805617 3184 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:29:24.806958 kubelet[3184]: I0430 03:29:24.806898 3184 server.go:490] "Adding debug handlers to kubelet server" Apr 30 03:29:24.808411 kubelet[3184]: I0430 03:29:24.808349 3184 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:29:24.808742 kubelet[3184]: I0430 03:29:24.808729 3184 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:29:24.809194 kubelet[3184]: I0430 03:29:24.809076 3184 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 03:29:24.812275 kubelet[3184]: I0430 03:29:24.812258 3184 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 03:29:24.812826 kubelet[3184]: E0430 03:29:24.812567 3184 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-a5554f61da\" not found" Apr 30 03:29:24.814320 kubelet[3184]: I0430 03:29:24.814304 3184 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:29:24.814587 kubelet[3184]: I0430 03:29:24.814571 3184 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:29:24.816647 kubelet[3184]: I0430 03:29:24.816609 3184 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:29:24.817861 kubelet[3184]: I0430 03:29:24.817844 3184 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:29:24.817970 kubelet[3184]: I0430 03:29:24.817961 3184 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 03:29:24.818037 kubelet[3184]: I0430 03:29:24.818028 3184 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 30 03:29:24.818309 kubelet[3184]: I0430 03:29:24.818086 3184 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 03:29:24.818309 kubelet[3184]: E0430 03:29:24.818135 3184 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:29:24.825000 kubelet[3184]: I0430 03:29:24.824889 3184 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:29:24.825704 kubelet[3184]: I0430 03:29:24.825683 3184 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:29:24.834617 kubelet[3184]: E0430 03:29:24.834173 3184 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:29:24.836581 kubelet[3184]: I0430 03:29:24.836564 3184 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:29:24.871124 kubelet[3184]: I0430 03:29:24.871108 3184 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 03:29:24.872186 kubelet[3184]: I0430 03:29:24.871266 3184 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 03:29:24.872186 kubelet[3184]: I0430 03:29:24.871291 3184 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:24.872186 kubelet[3184]: I0430 03:29:24.871462 3184 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:29:24.872186 kubelet[3184]: I0430 03:29:24.871476 3184 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:29:24.872186 kubelet[3184]: I0430 03:29:24.871491 3184 policy_none.go:49] "None policy: Start" Apr 30 03:29:24.872186 kubelet[3184]: I0430 03:29:24.871500 3184 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 03:29:24.872186 kubelet[3184]: I0430 03:29:24.871508 3184 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:29:24.872186 kubelet[3184]: I0430 03:29:24.871614 3184 state_mem.go:75] "Updated machine memory state" Apr 30 03:29:24.875802 kubelet[3184]: I0430 03:29:24.875784 3184 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:29:24.876292 kubelet[3184]: I0430 03:29:24.876277 3184 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 03:29:24.876446 kubelet[3184]: I0430 03:29:24.876416 3184 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:29:24.877089 kubelet[3184]: I0430 03:29:24.877074 3184 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:29:24.878844 kubelet[3184]: E0430 03:29:24.878824 3184 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 30 03:29:24.919328 kubelet[3184]: I0430 03:29:24.919305 3184 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:24.919707 kubelet[3184]: I0430 03:29:24.919529 3184 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:24.920018 kubelet[3184]: I0430 03:29:24.919650 3184 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:24.926405 kubelet[3184]: W0430 03:29:24.926388 3184 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:24.931256 kubelet[3184]: W0430 03:29:24.931064 3184 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:24.931344 kubelet[3184]: W0430 03:29:24.931273 3184 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:24.980696 kubelet[3184]: I0430 03:29:24.979483 3184 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:24.991708 kubelet[3184]: I0430 03:29:24.991683 3184 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:24.991870 kubelet[3184]: I0430 03:29:24.991766 3184 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.3-a-a5554f61da" Apr 30 03:29:25.014913 kubelet[3184]: I0430 03:29:25.014880 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f22980ed5b69c8692f843088521cce25-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-a5554f61da\" (UID: \"f22980ed5b69c8692f843088521cce25\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:25.014913 kubelet[3184]: I0430 03:29:25.014911 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7c8587db4e50708fba0c82140af4522-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-a5554f61da\" (UID: \"f7c8587db4e50708fba0c82140af4522\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:25.015048 kubelet[3184]: I0430 03:29:25.014935 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f7c8587db4e50708fba0c82140af4522-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-a5554f61da\" (UID: \"f7c8587db4e50708fba0c82140af4522\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:25.015048 kubelet[3184]: I0430 03:29:25.014954 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7c8587db4e50708fba0c82140af4522-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-a5554f61da\" (UID: \"f7c8587db4e50708fba0c82140af4522\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:25.015048 kubelet[3184]: I0430 03:29:25.014978 3184 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c8587db4e50708fba0c82140af4522-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-a5554f61da\" (UID: \"f7c8587db4e50708fba0c82140af4522\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:25.015048 kubelet[3184]: I0430 03:29:25.015000 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75433f5e2eae76115b6cd92e9690eaed-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-a5554f61da\" (UID: \"75433f5e2eae76115b6cd92e9690eaed\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:25.015048 kubelet[3184]: I0430 03:29:25.015021 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f22980ed5b69c8692f843088521cce25-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-a5554f61da\" (UID: \"f22980ed5b69c8692f843088521cce25\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:25.015222 kubelet[3184]: I0430 03:29:25.015043 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f22980ed5b69c8692f843088521cce25-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-a5554f61da\" (UID: \"f22980ed5b69c8692f843088521cce25\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:25.015222 kubelet[3184]: I0430 03:29:25.015066 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7c8587db4e50708fba0c82140af4522-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-a5554f61da\" (UID: \"f7c8587db4e50708fba0c82140af4522\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:25.799952 kubelet[3184]: I0430 03:29:25.799907 3184 apiserver.go:52] "Watching apiserver" Apr 30 03:29:25.814804 kubelet[3184]: I0430 03:29:25.814770 3184 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:29:25.856841 kubelet[3184]: I0430 03:29:25.856505 3184 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:25.862085 kubelet[3184]: W0430 03:29:25.862065 3184 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:25.862359 kubelet[3184]: E0430 03:29:25.862117 3184 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-a-a5554f61da\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.3-a-a5554f61da" Apr 30 03:29:25.883440 kubelet[3184]: I0430 03:29:25.882697 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-a-a5554f61da" podStartSLOduration=1.882684055 podStartE2EDuration="1.882684055s" podCreationTimestamp="2025-04-30 03:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:25.882455654 +0000 UTC m=+1.145914108" watchObservedRunningTime="2025-04-30 03:29:25.882684055 +0000 UTC m=+1.146142409" Apr 30 
03:29:25.903091 kubelet[3184]: I0430 03:29:25.902267 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-a-a5554f61da" podStartSLOduration=1.902251099 podStartE2EDuration="1.902251099s" podCreationTimestamp="2025-04-30 03:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:25.902096799 +0000 UTC m=+1.165555153" watchObservedRunningTime="2025-04-30 03:29:25.902251099 +0000 UTC m=+1.165709553" Apr 30 03:29:25.903091 kubelet[3184]: I0430 03:29:25.902379 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-a5554f61da" podStartSLOduration=1.9023539999999999 podStartE2EDuration="1.902354s" podCreationTimestamp="2025-04-30 03:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:25.892910278 +0000 UTC m=+1.156368732" watchObservedRunningTime="2025-04-30 03:29:25.902354 +0000 UTC m=+1.165812354" Apr 30 03:29:28.949641 kubelet[3184]: I0430 03:29:28.948632 3184 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:29:28.950151 containerd[1693]: time="2025-04-30T03:29:28.949565734Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 03:29:28.951033 kubelet[3184]: I0430 03:29:28.950610 3184 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:29:29.839199 systemd[1]: Created slice kubepods-besteffort-podf8636da1_59f0_4517_ba22_bcacab2171a3.slice - libcontainer container kubepods-besteffort-podf8636da1_59f0_4517_ba22_bcacab2171a3.slice. 
Apr 30 03:29:29.941399 kubelet[3184]: I0430 03:29:29.941324 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f8636da1-59f0-4517-ba22-bcacab2171a3-kube-proxy\") pod \"kube-proxy-6m88b\" (UID: \"f8636da1-59f0-4517-ba22-bcacab2171a3\") " pod="kube-system/kube-proxy-6m88b" Apr 30 03:29:29.941399 kubelet[3184]: I0430 03:29:29.941394 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8636da1-59f0-4517-ba22-bcacab2171a3-xtables-lock\") pod \"kube-proxy-6m88b\" (UID: \"f8636da1-59f0-4517-ba22-bcacab2171a3\") " pod="kube-system/kube-proxy-6m88b" Apr 30 03:29:29.941701 kubelet[3184]: I0430 03:29:29.941420 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8636da1-59f0-4517-ba22-bcacab2171a3-lib-modules\") pod \"kube-proxy-6m88b\" (UID: \"f8636da1-59f0-4517-ba22-bcacab2171a3\") " pod="kube-system/kube-proxy-6m88b" Apr 30 03:29:29.941701 kubelet[3184]: I0430 03:29:29.941442 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbf5k\" (UniqueName: \"kubernetes.io/projected/f8636da1-59f0-4517-ba22-bcacab2171a3-kube-api-access-mbf5k\") pod \"kube-proxy-6m88b\" (UID: \"f8636da1-59f0-4517-ba22-bcacab2171a3\") " pod="kube-system/kube-proxy-6m88b" Apr 30 03:29:30.074058 kubelet[3184]: I0430 03:29:30.073887 3184 status_manager.go:890] "Failed to get status for pod" podUID="458e8215-2d8e-4de4-b35b-a8685a0a6e1a" pod="tigera-operator/tigera-operator-789496d6f5-9vhqx" err="pods \"tigera-operator-789496d6f5-9vhqx\" is forbidden: User \"system:node:ci-4081.3.3-a-a5554f61da\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081.3.3-a-a5554f61da' and this object" Apr 30 03:29:30.074058 kubelet[3184]: W0430 03:29:30.073989 3184 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081.3.3-a-a5554f61da" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.3-a-a5554f61da' and this object Apr 30 03:29:30.074058 kubelet[3184]: E0430 03:29:30.074023 3184 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4081.3.3-a-a5554f61da\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081.3.3-a-a5554f61da' and this object" logger="UnhandledError" Apr 30 03:29:30.075246 kubelet[3184]: W0430 03:29:30.074518 3184 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.3-a-a5554f61da" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.3-a-a5554f61da' and this object Apr 30 03:29:30.075246 kubelet[3184]: E0430 03:29:30.074549 3184 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to 
watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.3-a-a5554f61da\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081.3.3-a-a5554f61da' and this object" logger="UnhandledError" Apr 30 03:29:30.083896 systemd[1]: Created slice kubepods-besteffort-pod458e8215_2d8e_4de4_b35b_a8685a0a6e1a.slice - libcontainer container kubepods-besteffort-pod458e8215_2d8e_4de4_b35b_a8685a0a6e1a.slice. Apr 30 03:29:30.142592 kubelet[3184]: I0430 03:29:30.142434 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/458e8215-2d8e-4de4-b35b-a8685a0a6e1a-var-lib-calico\") pod \"tigera-operator-789496d6f5-9vhqx\" (UID: \"458e8215-2d8e-4de4-b35b-a8685a0a6e1a\") " pod="tigera-operator/tigera-operator-789496d6f5-9vhqx" Apr 30 03:29:30.142592 kubelet[3184]: I0430 03:29:30.142473 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8vdd\" (UniqueName: \"kubernetes.io/projected/458e8215-2d8e-4de4-b35b-a8685a0a6e1a-kube-api-access-k8vdd\") pod \"tigera-operator-789496d6f5-9vhqx\" (UID: \"458e8215-2d8e-4de4-b35b-a8685a0a6e1a\") " pod="tigera-operator/tigera-operator-789496d6f5-9vhqx" Apr 30 03:29:30.150112 containerd[1693]: time="2025-04-30T03:29:30.150070766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6m88b,Uid:f8636da1-59f0-4517-ba22-bcacab2171a3,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:30.190625 containerd[1693]: time="2025-04-30T03:29:30.190295958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:30.190625 containerd[1693]: time="2025-04-30T03:29:30.190353358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:30.190625 containerd[1693]: time="2025-04-30T03:29:30.190403358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:30.190625 containerd[1693]: time="2025-04-30T03:29:30.190531458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:30.214759 systemd[1]: Started cri-containerd-6df37500a3e8ad6964eea403b87669b7e53b444a09deee50ab3d575dd63f40ab.scope - libcontainer container 6df37500a3e8ad6964eea403b87669b7e53b444a09deee50ab3d575dd63f40ab. 
Apr 30 03:29:30.234542 containerd[1693]: time="2025-04-30T03:29:30.234499058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6m88b,Uid:f8636da1-59f0-4517-ba22-bcacab2171a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6df37500a3e8ad6964eea403b87669b7e53b444a09deee50ab3d575dd63f40ab\"" Apr 30 03:29:30.237827 containerd[1693]: time="2025-04-30T03:29:30.237794266Z" level=info msg="CreateContainer within sandbox \"6df37500a3e8ad6964eea403b87669b7e53b444a09deee50ab3d575dd63f40ab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:29:30.275127 containerd[1693]: time="2025-04-30T03:29:30.275056551Z" level=info msg="CreateContainer within sandbox \"6df37500a3e8ad6964eea403b87669b7e53b444a09deee50ab3d575dd63f40ab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7e8975f94ef0371e4b07eec5bb18ffb40f1b9876c19fb8e9c2b8ff1e77e271e4\"" Apr 30 03:29:30.275604 containerd[1693]: time="2025-04-30T03:29:30.275541052Z" level=info msg="StartContainer for \"7e8975f94ef0371e4b07eec5bb18ffb40f1b9876c19fb8e9c2b8ff1e77e271e4\"" Apr 30 03:29:30.305704 systemd[1]: Started cri-containerd-7e8975f94ef0371e4b07eec5bb18ffb40f1b9876c19fb8e9c2b8ff1e77e271e4.scope - libcontainer container 7e8975f94ef0371e4b07eec5bb18ffb40f1b9876c19fb8e9c2b8ff1e77e271e4. Apr 30 03:29:30.332006 containerd[1693]: time="2025-04-30T03:29:30.331957380Z" level=info msg="StartContainer for \"7e8975f94ef0371e4b07eec5bb18ffb40f1b9876c19fb8e9c2b8ff1e77e271e4\" returns successfully" Apr 30 03:29:30.879350 sudo[2216]: pam_unix(sudo:session): session closed for user root Apr 30 03:29:30.885004 kubelet[3184]: I0430 03:29:30.884913 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6m88b" podStartSLOduration=1.884891638 podStartE2EDuration="1.884891638s" podCreationTimestamp="2025-04-30 03:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:30.884628338 +0000 UTC m=+6.148086792" watchObservedRunningTime="2025-04-30 03:29:30.884891638 +0000 UTC m=+6.148350092" Apr 30 03:29:30.981102 sshd[2213]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:30.984818 systemd[1]: sshd@6-10.200.8.47:22-10.200.16.10:40610.service: Deactivated successfully. Apr 30 03:29:30.987044 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:29:30.987258 systemd[1]: session-9.scope: Consumed 4.832s CPU time, 156.5M memory peak, 0B memory swap peak. Apr 30 03:29:30.988518 systemd-logind[1671]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:29:30.989781 systemd-logind[1671]: Removed session 9. Apr 30 03:29:31.249482 kubelet[3184]: E0430 03:29:31.249091 3184 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 30 03:29:31.249482 kubelet[3184]: E0430 03:29:31.249129 3184 projected.go:194] Error preparing data for projected volume kube-api-access-k8vdd for pod tigera-operator/tigera-operator-789496d6f5-9vhqx: failed to sync configmap cache: timed out waiting for the condition Apr 30 03:29:31.249482 kubelet[3184]: E0430 03:29:31.249203 3184 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/458e8215-2d8e-4de4-b35b-a8685a0a6e1a-kube-api-access-k8vdd podName:458e8215-2d8e-4de4-b35b-a8685a0a6e1a nodeName:}" failed. 
No retries permitted until 2025-04-30 03:29:31.749179967 +0000 UTC m=+7.012638321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k8vdd" (UniqueName: "kubernetes.io/projected/458e8215-2d8e-4de4-b35b-a8685a0a6e1a-kube-api-access-k8vdd") pod "tigera-operator-789496d6f5-9vhqx" (UID: "458e8215-2d8e-4de4-b35b-a8685a0a6e1a") : failed to sync configmap cache: timed out waiting for the condition Apr 30 03:29:31.888116 containerd[1693]: time="2025-04-30T03:29:31.888076721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-9vhqx,Uid:458e8215-2d8e-4de4-b35b-a8685a0a6e1a,Namespace:tigera-operator,Attempt:0,}" Apr 30 03:29:31.927580 containerd[1693]: time="2025-04-30T03:29:31.927346611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:31.927580 containerd[1693]: time="2025-04-30T03:29:31.927428011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:31.927580 containerd[1693]: time="2025-04-30T03:29:31.927449511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:31.927906 containerd[1693]: time="2025-04-30T03:29:31.927689611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:31.952492 systemd[1]: Started cri-containerd-5b1608db09069a6274c58fa14e0e69026ab9c839ad2cefa0125ee0a1ae325663.scope - libcontainer container 5b1608db09069a6274c58fa14e0e69026ab9c839ad2cefa0125ee0a1ae325663. Apr 30 03:29:31.987577 containerd[1693]: time="2025-04-30T03:29:31.987510148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-9vhqx,Uid:458e8215-2d8e-4de4-b35b-a8685a0a6e1a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5b1608db09069a6274c58fa14e0e69026ab9c839ad2cefa0125ee0a1ae325663\"" Apr 30 03:29:31.989594 containerd[1693]: time="2025-04-30T03:29:31.989532252Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 03:29:37.647135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount834972928.mount: Deactivated successfully. 
Apr 30 03:29:38.356779 containerd[1693]: time="2025-04-30T03:29:38.356729537Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:38.359671 containerd[1693]: time="2025-04-30T03:29:38.359595448Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 03:29:38.362783 containerd[1693]: time="2025-04-30T03:29:38.362728861Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:38.366214 containerd[1693]: time="2025-04-30T03:29:38.366105474Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:38.366903 containerd[1693]: time="2025-04-30T03:29:38.366867176Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 6.377298424s" Apr 30 03:29:38.366987 containerd[1693]: time="2025-04-30T03:29:38.366907377Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 03:29:38.369621 containerd[1693]: time="2025-04-30T03:29:38.369484087Z" level=info msg="CreateContainer within sandbox \"5b1608db09069a6274c58fa14e0e69026ab9c839ad2cefa0125ee0a1ae325663\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 03:29:38.399560 containerd[1693]: time="2025-04-30T03:29:38.399529302Z" level=info msg="CreateContainer within sandbox \"5b1608db09069a6274c58fa14e0e69026ab9c839ad2cefa0125ee0a1ae325663\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"74becfcfb414bd5ee557d26bc84cea81ea249e41312385edd3c025967037c090\"" Apr 30 03:29:38.400007 containerd[1693]: time="2025-04-30T03:29:38.399903004Z" level=info msg="StartContainer for \"74becfcfb414bd5ee557d26bc84cea81ea249e41312385edd3c025967037c090\"" Apr 30 03:29:38.429524 systemd[1]: Started cri-containerd-74becfcfb414bd5ee557d26bc84cea81ea249e41312385edd3c025967037c090.scope - libcontainer container 74becfcfb414bd5ee557d26bc84cea81ea249e41312385edd3c025967037c090. 
Apr 30 03:29:38.453880 containerd[1693]: time="2025-04-30T03:29:38.453795211Z" level=info msg="StartContainer for \"74becfcfb414bd5ee557d26bc84cea81ea249e41312385edd3c025967037c090\" returns successfully"
Apr 30 03:29:38.897041 kubelet[3184]: I0430 03:29:38.896978 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-9vhqx" podStartSLOduration=2.517621584 podStartE2EDuration="8.896960415s" podCreationTimestamp="2025-04-30 03:29:30 +0000 UTC" firstStartedPulling="2025-04-30 03:29:31.98862135 +0000 UTC m=+7.252079704" lastFinishedPulling="2025-04-30 03:29:38.367960181 +0000 UTC m=+13.631418535" observedRunningTime="2025-04-30 03:29:38.896650414 +0000 UTC m=+14.160108868" watchObservedRunningTime="2025-04-30 03:29:38.896960415 +0000 UTC m=+14.160418769"
Apr 30 03:29:41.657097 systemd[1]: Created slice kubepods-besteffort-pod41bca7b6_a340_4ab8_91a5_68b800e86b07.slice - libcontainer container kubepods-besteffort-pod41bca7b6_a340_4ab8_91a5_68b800e86b07.slice.
Apr 30 03:29:41.716027 kubelet[3184]: I0430 03:29:41.715055 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/41bca7b6-a340-4ab8-91a5-68b800e86b07-typha-certs\") pod \"calico-typha-78b565b7cb-svsjp\" (UID: \"41bca7b6-a340-4ab8-91a5-68b800e86b07\") " pod="calico-system/calico-typha-78b565b7cb-svsjp"
Apr 30 03:29:41.716027 kubelet[3184]: I0430 03:29:41.715107 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmbh9\" (UniqueName: \"kubernetes.io/projected/41bca7b6-a340-4ab8-91a5-68b800e86b07-kube-api-access-pmbh9\") pod \"calico-typha-78b565b7cb-svsjp\" (UID: \"41bca7b6-a340-4ab8-91a5-68b800e86b07\") " pod="calico-system/calico-typha-78b565b7cb-svsjp"
Apr 30 03:29:41.716027 kubelet[3184]: I0430 03:29:41.715132 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bca7b6-a340-4ab8-91a5-68b800e86b07-tigera-ca-bundle\") pod \"calico-typha-78b565b7cb-svsjp\" (UID: \"41bca7b6-a340-4ab8-91a5-68b800e86b07\") " pod="calico-system/calico-typha-78b565b7cb-svsjp"
Apr 30 03:29:41.764976 systemd[1]: Created slice kubepods-besteffort-pod20f5c843_fdd5_4e76_86e5_a92b9e42ef1d.slice - libcontainer container kubepods-besteffort-pod20f5c843_fdd5_4e76_86e5_a92b9e42ef1d.slice.
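The startup-latency line above appears to compute podStartSLOduration as the end-to-end duration minus the image-pull window; recomputing from the logged timestamps reproduces 2.517621584s exactly:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Recomputes the tigera-operator startup figures from the timestamps in
    // the log entry above.
    func main() {
    	parse := func(s string) time.Time {
    		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2025-04-30 03:29:30 +0000 UTC")
    	pullStart := parse("2025-04-30 03:29:31.98862135 +0000 UTC")
    	pullEnd := parse("2025-04-30 03:29:38.367960181 +0000 UTC")
    	running := parse("2025-04-30 03:29:38.896960415 +0000 UTC")

    	e2e := running.Sub(created)
    	slo := e2e - pullEnd.Sub(pullStart)
    	fmt.Printf("e2e=%v slo=%v\n", e2e, slo) // e2e=8.896960415s slo=2.517621584s
    }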
Apr 30 03:29:41.816036 kubelet[3184]: I0430 03:29:41.815803 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/20f5c843-fdd5-4e76-86e5-a92b9e42ef1d-cni-net-dir\") pod \"calico-node-w4g96\" (UID: \"20f5c843-fdd5-4e76-86e5-a92b9e42ef1d\") " pod="calico-system/calico-node-w4g96"
Apr 30 03:29:41.816761 kubelet[3184]: I0430 03:29:41.816503 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/20f5c843-fdd5-4e76-86e5-a92b9e42ef1d-cni-log-dir\") pod \"calico-node-w4g96\" (UID: \"20f5c843-fdd5-4e76-86e5-a92b9e42ef1d\") " pod="calico-system/calico-node-w4g96"
Apr 30 03:29:41.816761 kubelet[3184]: I0430 03:29:41.816546 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20f5c843-fdd5-4e76-86e5-a92b9e42ef1d-tigera-ca-bundle\") pod \"calico-node-w4g96\" (UID: \"20f5c843-fdd5-4e76-86e5-a92b9e42ef1d\") " pod="calico-system/calico-node-w4g96"
Apr 30 03:29:41.816761 kubelet[3184]: I0430 03:29:41.816568 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/20f5c843-fdd5-4e76-86e5-a92b9e42ef1d-node-certs\") pod \"calico-node-w4g96\" (UID: \"20f5c843-fdd5-4e76-86e5-a92b9e42ef1d\") " pod="calico-system/calico-node-w4g96"
Apr 30 03:29:41.816761 kubelet[3184]: I0430 03:29:41.816590 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/20f5c843-fdd5-4e76-86e5-a92b9e42ef1d-var-run-calico\") pod \"calico-node-w4g96\" (UID: \"20f5c843-fdd5-4e76-86e5-a92b9e42ef1d\") " pod="calico-system/calico-node-w4g96"
Apr 30 03:29:41.816761 kubelet[3184]: I0430 03:29:41.816612 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/20f5c843-fdd5-4e76-86e5-a92b9e42ef1d-cni-bin-dir\") pod \"calico-node-w4g96\" (UID: \"20f5c843-fdd5-4e76-86e5-a92b9e42ef1d\") " pod="calico-system/calico-node-w4g96"
Apr 30 03:29:41.817008 kubelet[3184]: I0430 03:29:41.816638 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/20f5c843-fdd5-4e76-86e5-a92b9e42ef1d-flexvol-driver-host\") pod \"calico-node-w4g96\" (UID: \"20f5c843-fdd5-4e76-86e5-a92b9e42ef1d\") " pod="calico-system/calico-node-w4g96"
Apr 30 03:29:41.817008 kubelet[3184]: I0430 03:29:41.816675 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/20f5c843-fdd5-4e76-86e5-a92b9e42ef1d-var-lib-calico\") pod \"calico-node-w4g96\" (UID: \"20f5c843-fdd5-4e76-86e5-a92b9e42ef1d\") " pod="calico-system/calico-node-w4g96"
Apr 30 03:29:41.817008 kubelet[3184]: I0430 03:29:41.816700 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20f5c843-fdd5-4e76-86e5-a92b9e42ef1d-lib-modules\") pod \"calico-node-w4g96\" (UID: \"20f5c843-fdd5-4e76-86e5-a92b9e42ef1d\") " pod="calico-system/calico-node-w4g96"
Apr 30 03:29:41.817008 kubelet[3184]: I0430 03:29:41.816723 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20f5c843-fdd5-4e76-86e5-a92b9e42ef1d-xtables-lock\") pod \"calico-node-w4g96\" (UID: \"20f5c843-fdd5-4e76-86e5-a92b9e42ef1d\") " pod="calico-system/calico-node-w4g96"
Apr 30 03:29:41.817008 kubelet[3184]: I0430 03:29:41.816747 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/20f5c843-fdd5-4e76-86e5-a92b9e42ef1d-policysync\") pod \"calico-node-w4g96\" (UID: \"20f5c843-fdd5-4e76-86e5-a92b9e42ef1d\") " pod="calico-system/calico-node-w4g96"
Apr 30 03:29:41.817209 kubelet[3184]: I0430 03:29:41.816773 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgsph\" (UniqueName: \"kubernetes.io/projected/20f5c843-fdd5-4e76-86e5-a92b9e42ef1d-kube-api-access-qgsph\") pod \"calico-node-w4g96\" (UID: \"20f5c843-fdd5-4e76-86e5-a92b9e42ef1d\") " pod="calico-system/calico-node-w4g96"
Apr 30 03:29:41.878185 kubelet[3184]: E0430 03:29:41.878139 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqthf" podUID="79a6da92-25f7-40b3-a880-7f6f766b31fd"
Apr 30 03:29:41.917269 kubelet[3184]: I0430 03:29:41.917139 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/79a6da92-25f7-40b3-a880-7f6f766b31fd-varrun\") pod \"csi-node-driver-xqthf\" (UID: \"79a6da92-25f7-40b3-a880-7f6f766b31fd\") " pod="calico-system/csi-node-driver-xqthf"
Apr 30 03:29:41.917269 kubelet[3184]: I0430 03:29:41.917188 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hrdb\" (UniqueName: \"kubernetes.io/projected/79a6da92-25f7-40b3-a880-7f6f766b31fd-kube-api-access-5hrdb\") pod \"csi-node-driver-xqthf\" (UID: \"79a6da92-25f7-40b3-a880-7f6f766b31fd\") " pod="calico-system/csi-node-driver-xqthf"
Apr 30 03:29:41.917269 kubelet[3184]: I0430 03:29:41.917224 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79a6da92-25f7-40b3-a880-7f6f766b31fd-kubelet-dir\") pod \"csi-node-driver-xqthf\" (UID: \"79a6da92-25f7-40b3-a880-7f6f766b31fd\") " pod="calico-system/csi-node-driver-xqthf"
Apr 30 03:29:41.917269 kubelet[3184]: I0430 03:29:41.917271 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/79a6da92-25f7-40b3-a880-7f6f766b31fd-registration-dir\") pod \"csi-node-driver-xqthf\" (UID: \"79a6da92-25f7-40b3-a880-7f6f766b31fd\") " pod="calico-system/csi-node-driver-xqthf"
Apr 30 03:29:41.917554 kubelet[3184]: I0430 03:29:41.917316 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/79a6da92-25f7-40b3-a880-7f6f766b31fd-socket-dir\") pod \"csi-node-driver-xqthf\" (UID: \"79a6da92-25f7-40b3-a880-7f6f766b31fd\") " pod="calico-system/csi-node-driver-xqthf"
Apr 30 03:29:41.920395 kubelet[3184]: E0430 03:29:41.919780 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:41.920395 kubelet[3184]: W0430 03:29:41.919907 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:41.920395 kubelet[3184]: E0430 03:29:41.919928 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same E/W/E FlexVolume probe failure repeats, with new timestamps only, through Apr 30 03:29:41.957 ...]
output: "", error: unexpected end of JSON input Apr 30 03:29:41.920395 kubelet[3184]: W0430 03:29:41.919907 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.920395 kubelet[3184]: E0430 03:29:41.919928 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.921641 kubelet[3184]: E0430 03:29:41.921622 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.921641 kubelet[3184]: W0430 03:29:41.921640 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.922484 kubelet[3184]: E0430 03:29:41.921658 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.922484 kubelet[3184]: E0430 03:29:41.922116 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.922484 kubelet[3184]: W0430 03:29:41.922131 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.922484 kubelet[3184]: E0430 03:29:41.922146 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.924438 kubelet[3184]: E0430 03:29:41.924417 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.924438 kubelet[3184]: W0430 03:29:41.924437 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.925319 kubelet[3184]: E0430 03:29:41.924458 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.925319 kubelet[3184]: E0430 03:29:41.924785 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.925319 kubelet[3184]: W0430 03:29:41.924797 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.925319 kubelet[3184]: E0430 03:29:41.924815 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:41.926402 kubelet[3184]: E0430 03:29:41.926382 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.926402 kubelet[3184]: W0430 03:29:41.926401 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.926657 kubelet[3184]: E0430 03:29:41.926586 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.930408 kubelet[3184]: E0430 03:29:41.927471 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.930408 kubelet[3184]: W0430 03:29:41.927489 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.930408 kubelet[3184]: E0430 03:29:41.927664 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.930408 kubelet[3184]: E0430 03:29:41.928124 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.930408 kubelet[3184]: W0430 03:29:41.928166 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.930408 kubelet[3184]: E0430 03:29:41.928484 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.930408 kubelet[3184]: E0430 03:29:41.928823 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.930408 kubelet[3184]: W0430 03:29:41.928837 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.930408 kubelet[3184]: E0430 03:29:41.929006 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.930408 kubelet[3184]: E0430 03:29:41.929404 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.930897 kubelet[3184]: W0430 03:29:41.929418 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.930897 kubelet[3184]: E0430 03:29:41.929734 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:41.930897 kubelet[3184]: E0430 03:29:41.929977 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.930897 kubelet[3184]: W0430 03:29:41.929989 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.930897 kubelet[3184]: E0430 03:29:41.930167 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.930897 kubelet[3184]: E0430 03:29:41.930535 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.930897 kubelet[3184]: W0430 03:29:41.930548 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.931174 kubelet[3184]: E0430 03:29:41.930930 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.931174 kubelet[3184]: E0430 03:29:41.931082 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.931174 kubelet[3184]: W0430 03:29:41.931093 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.931523 kubelet[3184]: E0430 03:29:41.931176 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.931523 kubelet[3184]: E0430 03:29:41.931398 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.931523 kubelet[3184]: W0430 03:29:41.931412 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.937509 kubelet[3184]: E0430 03:29:41.931527 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.937509 kubelet[3184]: E0430 03:29:41.932201 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.937509 kubelet[3184]: W0430 03:29:41.932213 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.937509 kubelet[3184]: E0430 03:29:41.932514 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:41.937509 kubelet[3184]: E0430 03:29:41.933135 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.937509 kubelet[3184]: W0430 03:29:41.933159 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.937509 kubelet[3184]: E0430 03:29:41.933245 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.937509 kubelet[3184]: E0430 03:29:41.933826 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.937509 kubelet[3184]: W0430 03:29:41.933838 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.937509 kubelet[3184]: E0430 03:29:41.934023 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.937923 kubelet[3184]: E0430 03:29:41.934281 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.937923 kubelet[3184]: W0430 03:29:41.934292 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.937923 kubelet[3184]: E0430 03:29:41.934395 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.937923 kubelet[3184]: E0430 03:29:41.934714 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.937923 kubelet[3184]: W0430 03:29:41.934726 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.937923 kubelet[3184]: E0430 03:29:41.934916 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.937923 kubelet[3184]: E0430 03:29:41.935143 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.937923 kubelet[3184]: W0430 03:29:41.935158 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.937923 kubelet[3184]: E0430 03:29:41.935334 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:41.937923 kubelet[3184]: E0430 03:29:41.935653 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.938280 kubelet[3184]: W0430 03:29:41.935665 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.938280 kubelet[3184]: E0430 03:29:41.936180 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.938280 kubelet[3184]: E0430 03:29:41.936503 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.938280 kubelet[3184]: W0430 03:29:41.936530 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.938280 kubelet[3184]: E0430 03:29:41.936721 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.938280 kubelet[3184]: E0430 03:29:41.936792 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.938280 kubelet[3184]: W0430 03:29:41.936800 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.938280 kubelet[3184]: E0430 03:29:41.936818 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.938280 kubelet[3184]: E0430 03:29:41.937548 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.938280 kubelet[3184]: W0430 03:29:41.937560 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.938906 kubelet[3184]: E0430 03:29:41.937603 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.938906 kubelet[3184]: E0430 03:29:41.937885 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.938906 kubelet[3184]: W0430 03:29:41.937897 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.938906 kubelet[3184]: E0430 03:29:41.937910 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
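A related detail: the kubelet discovers FlexVolume drivers by scanning the exec directory, where each subdirectory is named vendor~driver and must contain an executable named after the driver, which is how nodeagent~uds resolves to the .../nodeagent~uds/uds path above. A small sketch of that mapping:

    package main

    import (
    	"fmt"
    	"path/filepath"
    	"strings"
    )

    // execPath maps a FlexVolume plugin directory name ("vendor~driver") to
    // the executable the kubelet will invoke, matching the nodeagent~uds/uds
    // path in the log.
    func execPath(pluginDir, entry string) (plugin, bin string) {
    	parts := strings.SplitN(entry, "~", 2)
    	driver := parts[len(parts)-1]
    	return strings.Replace(entry, "~", "/", 1), filepath.Join(pluginDir, entry, driver)
    }

    func main() {
    	plugin, bin := execPath("/opt/libexec/kubernetes/kubelet-plugins/volume/exec", "nodeagent~uds")
    	fmt.Println(plugin) // nodeagent/uds
    	fmt.Println(bin)    // /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
    }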
Apr 30 03:29:41.965232 containerd[1693]: time="2025-04-30T03:29:41.965188144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78b565b7cb-svsjp,Uid:41bca7b6-a340-4ab8-91a5-68b800e86b07,Namespace:calico-system,Attempt:0,}"
Apr 30 03:29:42.017423 containerd[1693]: time="2025-04-30T03:29:42.017289903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:29:42.018061 containerd[1693]: time="2025-04-30T03:29:42.017471904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:29:42.018530 kubelet[3184]: E0430 03:29:42.018493 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:42.018530 kubelet[3184]: W0430 03:29:42.018525 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:42.018695 kubelet[3184]: E0430 03:29:42.018551 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:42.019611 containerd[1693]: time="2025-04-30T03:29:42.018940108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:42.021243 containerd[1693]: time="2025-04-30T03:29:42.020909314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
[... the probe failure triple repeats, interleaved with the shim start-up, through Apr 30 03:29:42.037 ...]
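The four "loading plugin" lines repeat for every sandbox because each runc v2 shim initializes its own in-process plugin set on start. A generic register-then-load sketch; the plugin IDs come from the log, but the registry itself is illustrative rather than containerd's implementation:

    package main

    import "fmt"

    // initFn stands in for a plugin constructor; the map is an assumed
    // registry, not containerd's actual plugin machinery.
    type initFn func() error

    var registry = map[string]initFn{
    	"io.containerd.event.v1.publisher":   func() error { return nil },
    	"io.containerd.internal.v1.shutdown": func() error { return nil },
    	"io.containerd.ttrpc.v1.task":        func() error { return nil },
    	"io.containerd.ttrpc.v1.pause":       func() error { return nil },
    }

    func main() {
    	for id, load := range registry {
    		fmt.Printf("loading plugin %q...\n", id)
    		if err := load(); err != nil {
    			fmt.Printf("plugin %q failed: %v\n", id, err)
    		}
    	}
    }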
Apr 30 03:29:42.050607 systemd[1]: Started cri-containerd-c2bac8288affed7926956201f087ffe0c15c1d266a56051d9eed9982bf5bc1da.scope - libcontainer container c2bac8288affed7926956201f087ffe0c15c1d266a56051d9eed9982bf5bc1da.
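The scope unit above runs inside the kubepods-besteffort-pod...slice created earlier; the kubelet escapes dashes in the pod UID to underscores because "-" is the nesting separator in systemd slice names. Reproducing the unit name from the log:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // sliceName rebuilds the systemd slice unit the kubelet creates for a
    // best-effort pod, escaping "-" in the UID to "_".
    func sliceName(podUID string) string {
    	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
    	fmt.Println(sliceName("41bca7b6-a340-4ab8-91a5-68b800e86b07"))
    	// kubepods-besteffort-pod41bca7b6_a340_4ab8_91a5_68b800e86b07.slice
    }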
Apr 30 03:29:42.060592 kubelet[3184]: E0430 03:29:42.060336 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:42.060592 kubelet[3184]: W0430 03:29:42.060353 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:42.060592 kubelet[3184]: E0430 03:29:42.060398 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:42.071478 containerd[1693]: time="2025-04-30T03:29:42.070119764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w4g96,Uid:20f5c843-fdd5-4e76-86e5-a92b9e42ef1d,Namespace:calico-system,Attempt:0,}"
Apr 30 03:29:42.120464 containerd[1693]: time="2025-04-30T03:29:42.120381117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:29:42.122518 containerd[1693]: time="2025-04-30T03:29:42.122459323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:29:42.122658 containerd[1693]: time="2025-04-30T03:29:42.122557224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:42.122745 containerd[1693]: time="2025-04-30T03:29:42.122705824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:42.123013 containerd[1693]: time="2025-04-30T03:29:42.122982725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78b565b7cb-svsjp,Uid:41bca7b6-a340-4ab8-91a5-68b800e86b07,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2bac8288affed7926956201f087ffe0c15c1d266a56051d9eed9982bf5bc1da\""
Apr 30 03:29:42.125829 containerd[1693]: time="2025-04-30T03:29:42.125437433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
Apr 30 03:29:42.143548 systemd[1]: Started cri-containerd-14497a0495bf99e0ccfa0f3f4faf478737e8e9e97c6062d18b5c60474bf99b9e.scope - libcontainer container 14497a0495bf99e0ccfa0f3f4faf478737e8e9e97c6062d18b5c60474bf99b9e.
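The sandbox ID returned by RunPodSandbox is the handle every later call uses: CreateContainer is issued "within sandbox" that ID, and StartContainer runs the container ID it returns. A hypothetical miniature of that call order, with stand-in types rather than the real CRI API:

    package main

    import "fmt"

    // runtimeService is an illustrative stand-in for a CRI runtime; the
    // method names mirror the calls visible in the log, not the real
    // protobuf API.
    type runtimeService struct{ nextID int }

    func (r *runtimeService) RunPodSandbox(name string) string {
    	r.nextID++
    	return fmt.Sprintf("sandbox-%d(%s)", r.nextID, name)
    }

    func (r *runtimeService) CreateContainer(sandboxID, name string) string {
    	return fmt.Sprintf("container-%s-in-%s", name, sandboxID)
    }

    func (r *runtimeService) StartContainer(id string) { fmt.Println("started", id) }

    func main() {
    	rt := &runtimeService{}
    	sb := rt.RunPodSandbox("calico-typha-78b565b7cb-svsjp")
    	c := rt.CreateContainer(sb, "calico-typha")
    	rt.StartContainer(c)
    }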
Apr 30 03:29:42.170581 containerd[1693]: time="2025-04-30T03:29:42.169201266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w4g96,Uid:20f5c843-fdd5-4e76-86e5-a92b9e42ef1d,Namespace:calico-system,Attempt:0,} returns sandbox id \"14497a0495bf99e0ccfa0f3f4faf478737e8e9e97c6062d18b5c60474bf99b9e\""
Apr 30 03:29:43.819120 kubelet[3184]: E0430 03:29:43.818559 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqthf" podUID="79a6da92-25f7-40b3-a880-7f6f766b31fd"
Apr 30 03:29:44.182417 containerd[1693]: time="2025-04-30T03:29:44.182281699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:44.185252 containerd[1693]: time="2025-04-30T03:29:44.185094808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870"
Apr 30 03:29:44.189293 containerd[1693]: time="2025-04-30T03:29:44.189214820Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:44.193859 containerd[1693]: time="2025-04-30T03:29:44.193734134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:44.195214 containerd[1693]: time="2025-04-30T03:29:44.194813337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.069340304s"
Apr 30 03:29:44.195214 containerd[1693]: time="2025-04-30T03:29:44.194856438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\""
Apr 30 03:29:44.195985 containerd[1693]: time="2025-04-30T03:29:44.195951841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
Apr 30 03:29:44.210435 containerd[1693]: time="2025-04-30T03:29:44.210271785Z" level=info msg="CreateContainer within sandbox \"c2bac8288affed7926956201f087ffe0c15c1d266a56051d9eed9982bf5bc1da\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 30 03:29:44.252274 containerd[1693]: time="2025-04-30T03:29:44.252215712Z" level=info msg="CreateContainer within sandbox \"c2bac8288affed7926956201f087ffe0c15c1d266a56051d9eed9982bf5bc1da\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3debe9007fb4a97c52b1b9c7c2e83c114cd6e72e213eb9a760aad5de5a6b70a5\""
Apr 30 03:29:44.252842 containerd[1693]: time="2025-04-30T03:29:44.252810614Z" level=info msg="StartContainer for \"3debe9007fb4a97c52b1b9c7c2e83c114cd6e72e213eb9a760aad5de5a6b70a5\""
Apr 30 03:29:44.281504 systemd[1]: Started cri-containerd-3debe9007fb4a97c52b1b9c7c2e83c114cd6e72e213eb9a760aad5de5a6b70a5.scope - libcontainer container 3debe9007fb4a97c52b1b9c7c2e83c114cd6e72e213eb9a760aad5de5a6b70a5.
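From the two figures logged for the typha pull, 30426870 bytes read in 2.069340304s, the effective transfer rate works out to roughly 14 MiB/s:

    package main

    import "fmt"

    // Back-of-the-envelope throughput for the typha pull, using the two
    // figures in the log entries above.
    func main() {
    	const bytesRead = 30426870.0
    	const seconds = 2.069340304
    	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1024*1024)) // ~14.0 MiB/s
    }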
Apr 30 03:29:44.325241 containerd[1693]: time="2025-04-30T03:29:44.325116334Z" level=info msg="StartContainer for \"3debe9007fb4a97c52b1b9c7c2e83c114cd6e72e213eb9a760aad5de5a6b70a5\" returns successfully"
Apr 30 03:29:44.925088 kubelet[3184]: I0430 03:29:44.924985 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-78b565b7cb-svsjp" podStartSLOduration=1.854122753 podStartE2EDuration="3.924963762s" podCreationTimestamp="2025-04-30 03:29:41 +0000 UTC" firstStartedPulling="2025-04-30 03:29:42.124870231 +0000 UTC m=+17.388328685" lastFinishedPulling="2025-04-30 03:29:44.19571134 +0000 UTC m=+19.459169694" observedRunningTime="2025-04-30 03:29:44.923971859 +0000 UTC m=+20.187430313" watchObservedRunningTime="2025-04-30 03:29:44.924963762 +0000 UTC m=+20.188422216"
Apr 30 03:29:44.930970 kubelet[3184]: E0430 03:29:44.930945 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:44.930970 kubelet[3184]: W0430 03:29:44.930966 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:44.931213 kubelet[3184]: E0430 03:29:44.930986 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same probe failure triple repeats through Apr 30 03:29:44.945, where this excerpt ends ...]
Error: unexpected end of JSON input" Apr 30 03:29:44.946445 kubelet[3184]: E0430 03:29:44.945657 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:44.946445 kubelet[3184]: W0430 03:29:44.945669 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:44.946445 kubelet[3184]: E0430 03:29:44.945696 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:44.946445 kubelet[3184]: E0430 03:29:44.945914 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:44.946445 kubelet[3184]: W0430 03:29:44.945926 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:44.946445 kubelet[3184]: E0430 03:29:44.946050 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:44.946445 kubelet[3184]: E0430 03:29:44.946190 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:44.946445 kubelet[3184]: W0430 03:29:44.946201 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:44.946927 kubelet[3184]: E0430 03:29:44.946288 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:44.946927 kubelet[3184]: E0430 03:29:44.946478 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:44.946927 kubelet[3184]: W0430 03:29:44.946489 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:44.946927 kubelet[3184]: E0430 03:29:44.946661 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:44.946927 kubelet[3184]: E0430 03:29:44.946880 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:44.946927 kubelet[3184]: W0430 03:29:44.946891 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:44.946927 kubelet[3184]: E0430 03:29:44.946908 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:44.947342 kubelet[3184]: E0430 03:29:44.947315 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:44.947342 kubelet[3184]: W0430 03:29:44.947333 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:44.947543 kubelet[3184]: E0430 03:29:44.947356 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:44.947638 kubelet[3184]: E0430 03:29:44.947601 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:44.947638 kubelet[3184]: W0430 03:29:44.947611 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:44.947737 kubelet[3184]: E0430 03:29:44.947642 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:44.948101 kubelet[3184]: E0430 03:29:44.947850 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:44.948101 kubelet[3184]: W0430 03:29:44.947864 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:44.948101 kubelet[3184]: E0430 03:29:44.947950 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:44.948513 kubelet[3184]: E0430 03:29:44.948312 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:44.948513 kubelet[3184]: W0430 03:29:44.948324 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:44.948513 kubelet[3184]: E0430 03:29:44.948461 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:44.948658 kubelet[3184]: E0430 03:29:44.948640 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:44.948658 kubelet[3184]: W0430 03:29:44.948650 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:44.948739 kubelet[3184]: E0430 03:29:44.948680 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:44.949185 kubelet[3184]: E0430 03:29:44.949139 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:44.949185 kubelet[3184]: W0430 03:29:44.949152 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:44.949185 kubelet[3184]: E0430 03:29:44.949167 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:44.949439 kubelet[3184]: E0430 03:29:44.949415 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:44.949439 kubelet[3184]: W0430 03:29:44.949427 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:44.949439 kubelet[3184]: E0430 03:29:44.949447 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:44.949876 kubelet[3184]: E0430 03:29:44.949853 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:44.949876 kubelet[3184]: W0430 03:29:44.949869 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:44.950031 kubelet[3184]: E0430 03:29:44.949888 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:44.950126 kubelet[3184]: E0430 03:29:44.950106 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:44.950126 kubelet[3184]: W0430 03:29:44.950122 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:44.950198 kubelet[3184]: E0430 03:29:44.950135 3184 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:45.448551 containerd[1693]: time="2025-04-30T03:29:45.448220356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:45.450346 containerd[1693]: time="2025-04-30T03:29:45.450197362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:29:45.454989 containerd[1693]: time="2025-04-30T03:29:45.454938377Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:45.459189 containerd[1693]: time="2025-04-30T03:29:45.459137589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:45.459863 containerd[1693]: time="2025-04-30T03:29:45.459727591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.26373005s" Apr 30 03:29:45.459863 containerd[1693]: time="2025-04-30T03:29:45.459768691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:29:45.462293 containerd[1693]: time="2025-04-30T03:29:45.462249899Z" level=info msg="CreateContainer within sandbox \"14497a0495bf99e0ccfa0f3f4faf478737e8e9e97c6062d18b5c60474bf99b9e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:29:45.498174 containerd[1693]: time="2025-04-30T03:29:45.498139408Z" level=info msg="CreateContainer within sandbox \"14497a0495bf99e0ccfa0f3f4faf478737e8e9e97c6062d18b5c60474bf99b9e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e348010a37921583d869aa1e8d908850c90d6ef7e422cddf6ba418829b6f00b8\"" Apr 30 03:29:45.498746 containerd[1693]: time="2025-04-30T03:29:45.498665310Z" level=info msg="StartContainer for \"e348010a37921583d869aa1e8d908850c90d6ef7e422cddf6ba418829b6f00b8\"" Apr 30 03:29:45.539171 systemd[1]: run-containerd-runc-k8s.io-e348010a37921583d869aa1e8d908850c90d6ef7e422cddf6ba418829b6f00b8-runc.ofpV9s.mount: Deactivated successfully. Apr 30 03:29:45.549530 systemd[1]: Started cri-containerd-e348010a37921583d869aa1e8d908850c90d6ef7e422cddf6ba418829b6f00b8.scope - libcontainer container e348010a37921583d869aa1e8d908850c90d6ef7e422cddf6ba418829b6f00b8. Apr 30 03:29:45.576948 containerd[1693]: time="2025-04-30T03:29:45.576321347Z" level=info msg="StartContainer for \"e348010a37921583d869aa1e8d908850c90d6ef7e422cddf6ba418829b6f00b8\" returns successfully" Apr 30 03:29:45.586082 systemd[1]: cri-containerd-e348010a37921583d869aa1e8d908850c90d6ef7e422cddf6ba418829b6f00b8.scope: Deactivated successfully. Apr 30 03:29:45.611990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e348010a37921583d869aa1e8d908850c90d6ef7e422cddf6ba418829b6f00b8-rootfs.mount: Deactivated successfully. 
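The kubelet error storm above is the FlexVolume dynamic prober at work: on each probe pass, kubelet executes the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument `init` and unmarshals its stdout as JSON. The binary does not exist yet, so the exec fails, stdout stays empty, and unmarshalling an empty string yields the logged "unexpected end of JSON input". The flexvol-driver container created just above (from the pod2daemon-flexvol image) is the component that installs that binary. Below is a minimal sketch of the contract kubelet is probing for, assuming only the documented FlexVolume JSON shape, not Calico's actual uds driver:

```go
// flexvol_init_sketch.go — a minimal FlexVolume-style driver stub.
// Illustrative only: it answers the `init` call kubelet is shown making
// in the log above; Calico's real nodeagent~uds/uds driver does more.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape kubelet expects back from a driver.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status: "Success",
			// attach=false: kubelet should not route attach/detach calls here.
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any verb this stub does not implement.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```

Until a binary answering this contract exists at the probed path, each probe pass logs the same three lines (unmarshal failure, driver-call failure, plugin-probe failure), which is all the repetition above amounts to.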
Apr 30 03:29:45.819569 kubelet[3184]: E0430 03:29:45.819527 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqthf" podUID="79a6da92-25f7-40b3-a880-7f6f766b31fd" Apr 30 03:29:45.906555 kubelet[3184]: I0430 03:29:45.906529 3184 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:46.987717 containerd[1693]: time="2025-04-30T03:29:46.987648446Z" level=info msg="shim disconnected" id=e348010a37921583d869aa1e8d908850c90d6ef7e422cddf6ba418829b6f00b8 namespace=k8s.io Apr 30 03:29:46.987717 containerd[1693]: time="2025-04-30T03:29:46.987710547Z" level=warning msg="cleaning up after shim disconnected" id=e348010a37921583d869aa1e8d908850c90d6ef7e422cddf6ba418829b6f00b8 namespace=k8s.io Apr 30 03:29:46.987717 containerd[1693]: time="2025-04-30T03:29:46.987721347Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:47.818921 kubelet[3184]: E0430 03:29:47.818842 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqthf" podUID="79a6da92-25f7-40b3-a880-7f6f766b31fd" Apr 30 03:29:47.912614 containerd[1693]: time="2025-04-30T03:29:47.912560064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:29:49.818708 kubelet[3184]: E0430 03:29:49.818658 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqthf" podUID="79a6da92-25f7-40b3-a880-7f6f766b31fd" Apr 30 03:29:51.818979 kubelet[3184]: E0430 03:29:51.818935 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqthf" podUID="79a6da92-25f7-40b3-a880-7f6f766b31fd" Apr 30 03:29:51.967899 containerd[1693]: time="2025-04-30T03:29:51.967848490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:51.970547 containerd[1693]: time="2025-04-30T03:29:51.970481103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:29:51.974384 containerd[1693]: time="2025-04-30T03:29:51.973119617Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:51.978827 containerd[1693]: time="2025-04-30T03:29:51.978746546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:51.979747 containerd[1693]: time="2025-04-30T03:29:51.979627051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 4.067003486s" Apr 30 03:29:51.979747 containerd[1693]: time="2025-04-30T03:29:51.979660951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:29:51.982321 containerd[1693]: time="2025-04-30T03:29:51.982225164Z" level=info msg="CreateContainer within sandbox \"14497a0495bf99e0ccfa0f3f4faf478737e8e9e97c6062d18b5c60474bf99b9e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:29:52.015871 containerd[1693]: time="2025-04-30T03:29:52.015838038Z" level=info msg="CreateContainer within sandbox \"14497a0495bf99e0ccfa0f3f4faf478737e8e9e97c6062d18b5c60474bf99b9e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"26a1a6f481ddedc18c8ccdf4f18d4199a56e4eeb4f6fe49dea3550ef2467cd50\"" Apr 30 03:29:52.017378 containerd[1693]: time="2025-04-30T03:29:52.016215340Z" level=info msg="StartContainer for \"26a1a6f481ddedc18c8ccdf4f18d4199a56e4eeb4f6fe49dea3550ef2467cd50\"" Apr 30 03:29:52.047319 systemd[1]: run-containerd-runc-k8s.io-26a1a6f481ddedc18c8ccdf4f18d4199a56e4eeb4f6fe49dea3550ef2467cd50-runc.hLcVSu.mount: Deactivated successfully. Apr 30 03:29:52.056500 systemd[1]: Started cri-containerd-26a1a6f481ddedc18c8ccdf4f18d4199a56e4eeb4f6fe49dea3550ef2467cd50.scope - libcontainer container 26a1a6f481ddedc18c8ccdf4f18d4199a56e4eeb4f6fe49dea3550ef2467cd50. Apr 30 03:29:52.082335 containerd[1693]: time="2025-04-30T03:29:52.082142782Z" level=info msg="StartContainer for \"26a1a6f481ddedc18c8ccdf4f18d4199a56e4eeb4f6fe49dea3550ef2467cd50\" returns successfully" Apr 30 03:29:53.590027 systemd[1]: cri-containerd-26a1a6f481ddedc18c8ccdf4f18d4199a56e4eeb4f6fe49dea3550ef2467cd50.scope: Deactivated successfully. Apr 30 03:29:53.609922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26a1a6f481ddedc18c8ccdf4f18d4199a56e4eeb4f6fe49dea3550ef2467cd50-rootfs.mount: Deactivated successfully. Apr 30 03:29:53.677479 kubelet[3184]: I0430 03:29:53.677160 3184 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 03:29:53.717432 systemd[1]: Created slice kubepods-burstable-podac7b6b9e_a78e_4c10_8774_981b5e31a478.slice - libcontainer container kubepods-burstable-podac7b6b9e_a78e_4c10_8774_981b5e31a478.slice. 
Apr 30 03:29:54.128870 containerd[1693]: time="2025-04-30T03:29:54.127892893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xqthf,Uid:79a6da92-25f7-40b3-a880-7f6f766b31fd,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:54.129288 kubelet[3184]: W0430 03:29:53.727022 3184 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.3-a-a5554f61da" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.3-a-a5554f61da' and this object Apr 30 03:29:54.129288 kubelet[3184]: E0430 03:29:53.727151 3184 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081.3.3-a-a5554f61da\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.3-a-a5554f61da' and this object" logger="UnhandledError" Apr 30 03:29:54.129288 kubelet[3184]: I0430 03:29:53.811005 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49-calico-apiserver-certs\") pod \"calico-apiserver-5df5fd9db9-8qshg\" (UID: \"8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49\") " pod="calico-apiserver/calico-apiserver-5df5fd9db9-8qshg" Apr 30 03:29:54.129288 kubelet[3184]: I0430 03:29:53.811123 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qd86\" (UniqueName: \"kubernetes.io/projected/d053a264-e44d-4450-bd67-987ac2ab6edc-kube-api-access-9qd86\") pod \"calico-kube-controllers-89d6c9f55-qzrp4\" (UID: \"d053a264-e44d-4450-bd67-987ac2ab6edc\") " pod="calico-system/calico-kube-controllers-89d6c9f55-qzrp4" Apr 30 03:29:53.733912 systemd[1]: Created slice kubepods-burstable-pod7af10b02_117f_4e7d_ab6d_30d146cf4d03.slice - libcontainer container kubepods-burstable-pod7af10b02_117f_4e7d_ab6d_30d146cf4d03.slice. 
Apr 30 03:29:54.132749 kubelet[3184]: I0430 03:29:53.811199 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c40e58a4-a506-47e3-a7c8-b9609b315d66-calico-apiserver-certs\") pod \"calico-apiserver-5df5fd9db9-tlbv5\" (UID: \"c40e58a4-a506-47e3-a7c8-b9609b315d66\") " pod="calico-apiserver/calico-apiserver-5df5fd9db9-tlbv5" Apr 30 03:29:54.132749 kubelet[3184]: I0430 03:29:53.811226 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac7b6b9e-a78e-4c10-8774-981b5e31a478-config-volume\") pod \"coredns-668d6bf9bc-xfl4g\" (UID: \"ac7b6b9e-a78e-4c10-8774-981b5e31a478\") " pod="kube-system/coredns-668d6bf9bc-xfl4g" Apr 30 03:29:54.132749 kubelet[3184]: I0430 03:29:53.811244 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7af10b02-117f-4e7d-ab6d-30d146cf4d03-config-volume\") pod \"coredns-668d6bf9bc-2l4hw\" (UID: \"7af10b02-117f-4e7d-ab6d-30d146cf4d03\") " pod="kube-system/coredns-668d6bf9bc-2l4hw" Apr 30 03:29:54.132749 kubelet[3184]: I0430 03:29:53.811258 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d053a264-e44d-4450-bd67-987ac2ab6edc-tigera-ca-bundle\") pod \"calico-kube-controllers-89d6c9f55-qzrp4\" (UID: \"d053a264-e44d-4450-bd67-987ac2ab6edc\") " pod="calico-system/calico-kube-controllers-89d6c9f55-qzrp4" Apr 30 03:29:54.132749 kubelet[3184]: I0430 03:29:53.811274 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zxf6\" (UniqueName: \"kubernetes.io/projected/8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49-kube-api-access-5zxf6\") pod \"calico-apiserver-5df5fd9db9-8qshg\" (UID: \"8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49\") " pod="calico-apiserver/calico-apiserver-5df5fd9db9-8qshg" Apr 30 03:29:53.744475 systemd[1]: Created slice kubepods-besteffort-pod8a2b98b3_63eb_4ce1_b8d8_aa02372a6b49.slice - libcontainer container kubepods-besteffort-pod8a2b98b3_63eb_4ce1_b8d8_aa02372a6b49.slice. 
Apr 30 03:29:54.133136 kubelet[3184]: I0430 03:29:53.811293 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jbxs\" (UniqueName: \"kubernetes.io/projected/7af10b02-117f-4e7d-ab6d-30d146cf4d03-kube-api-access-8jbxs\") pod \"coredns-668d6bf9bc-2l4hw\" (UID: \"7af10b02-117f-4e7d-ab6d-30d146cf4d03\") " pod="kube-system/coredns-668d6bf9bc-2l4hw" Apr 30 03:29:54.133136 kubelet[3184]: I0430 03:29:53.811319 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbgmq\" (UniqueName: \"kubernetes.io/projected/c40e58a4-a506-47e3-a7c8-b9609b315d66-kube-api-access-zbgmq\") pod \"calico-apiserver-5df5fd9db9-tlbv5\" (UID: \"c40e58a4-a506-47e3-a7c8-b9609b315d66\") " pod="calico-apiserver/calico-apiserver-5df5fd9db9-tlbv5" Apr 30 03:29:54.133136 kubelet[3184]: I0430 03:29:53.811375 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fhzc\" (UniqueName: \"kubernetes.io/projected/ac7b6b9e-a78e-4c10-8774-981b5e31a478-kube-api-access-4fhzc\") pod \"coredns-668d6bf9bc-xfl4g\" (UID: \"ac7b6b9e-a78e-4c10-8774-981b5e31a478\") " pod="kube-system/coredns-668d6bf9bc-xfl4g" Apr 30 03:29:53.761132 systemd[1]: Created slice kubepods-besteffort-podd053a264_e44d_4450_bd67_987ac2ab6edc.slice - libcontainer container kubepods-besteffort-podd053a264_e44d_4450_bd67_987ac2ab6edc.slice. Apr 30 03:29:53.772209 systemd[1]: Created slice kubepods-besteffort-podc40e58a4_a506_47e3_a7c8_b9609b315d66.slice - libcontainer container kubepods-besteffort-podc40e58a4_a506_47e3_a7c8_b9609b315d66.slice. Apr 30 03:29:53.823799 systemd[1]: Created slice kubepods-besteffort-pod79a6da92_25f7_40b3_a880_7f6f766b31fd.slice - libcontainer container kubepods-besteffort-pod79a6da92_25f7_40b3_a880_7f6f766b31fd.slice. 
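The systemd "Created slice" entries in this stretch show kubelet (using the systemd cgroup driver) giving each newly admitted pod its own slice, named from the pod's QoS class plus its UID with dashes mapped to underscores: UID ac7b6b9e-a78e-4c10-8774-981b5e31a478 in the burstable class becomes kubepods-burstable-podac7b6b9e_a78e_4c10_8774_981b5e31a478.slice. A small sketch of that naming, inferred from these log lines rather than taken from kubelet's cgroup code (which also handles fuller systemd escaping):

```go
// podslice_sketch.go — reconstructs the pod slice names seen in the log.
// Assumption: the simple qos+UID mapping below is inferred from the
// "Created slice" entries themselves, not lifted from kubelet source.
package main

import (
	"fmt"
	"strings"
)

// podSliceName maps a QoS class and pod UID to the slice name kubelet
// creates, e.g. kubepods-burstable-pod<uid with '-' -> '_'>.slice.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UIDs taken from the log entries above.
	fmt.Println(podSliceName("burstable", "ac7b6b9e-a78e-4c10-8774-981b5e31a478"))
	fmt.Println(podSliceName("besteffort", "79a6da92-25f7-40b3-a880-7f6f766b31fd"))
}
```

Running it against the UIDs above reproduces the logged slice names verbatim.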
Apr 30 03:29:54.428597 containerd[1693]: time="2025-04-30T03:29:54.428447452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xfl4g,Uid:ac7b6b9e-a78e-4c10-8774-981b5e31a478,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:54.437507 containerd[1693]: time="2025-04-30T03:29:54.437461899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2l4hw,Uid:7af10b02-117f-4e7d-ab6d-30d146cf4d03,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:54.447514 containerd[1693]: time="2025-04-30T03:29:54.447485351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-89d6c9f55-qzrp4,Uid:d053a264-e44d-4450-bd67-987ac2ab6edc,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:54.738191 containerd[1693]: time="2025-04-30T03:29:54.737705756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df5fd9db9-8qshg,Uid:8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:29:54.740199 containerd[1693]: time="2025-04-30T03:29:54.740168669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df5fd9db9-tlbv5,Uid:c40e58a4-a506-47e3-a7c8-b9609b315d66,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:29:55.382179 containerd[1693]: time="2025-04-30T03:29:55.382088398Z" level=info msg="shim disconnected" id=26a1a6f481ddedc18c8ccdf4f18d4199a56e4eeb4f6fe49dea3550ef2467cd50 namespace=k8s.io Apr 30 03:29:55.382179 containerd[1693]: time="2025-04-30T03:29:55.382163299Z" level=warning msg="cleaning up after shim disconnected" id=26a1a6f481ddedc18c8ccdf4f18d4199a56e4eeb4f6fe49dea3550ef2467cd50 namespace=k8s.io Apr 30 03:29:55.382179 containerd[1693]: time="2025-04-30T03:29:55.382189999Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:55.706923 containerd[1693]: time="2025-04-30T03:29:55.706727182Z" level=error msg="Failed to destroy network for sandbox \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.709896 containerd[1693]: time="2025-04-30T03:29:55.709852298Z" level=error msg="encountered an error cleaning up failed sandbox \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.710308 containerd[1693]: time="2025-04-30T03:29:55.710264201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-89d6c9f55-qzrp4,Uid:d053a264-e44d-4450-bd67-987ac2ab6edc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.711719 kubelet[3184]: E0430 03:29:55.711645 3184 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.712337 kubelet[3184]: E0430 03:29:55.711733 3184 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-89d6c9f55-qzrp4" Apr 30 03:29:55.712337 kubelet[3184]: E0430 03:29:55.711762 3184 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-89d6c9f55-qzrp4" Apr 30 03:29:55.712337 kubelet[3184]: E0430 03:29:55.712139 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-89d6c9f55-qzrp4_calico-system(d053a264-e44d-4450-bd67-987ac2ab6edc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-89d6c9f55-qzrp4_calico-system(d053a264-e44d-4450-bd67-987ac2ab6edc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-89d6c9f55-qzrp4" podUID="d053a264-e44d-4450-bd67-987ac2ab6edc" Apr 30 03:29:55.753654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a-shm.mount: Deactivated successfully. 
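Every sandbox operation from here on fails with the same root cause: before it will do anything, the Calico CNI plugin checks for /var/lib/calico/nodename, the file calico-node writes at startup to record which node it is running on. That container is not up yet (its image, ghcr.io/flatcar/calico/node:v3.29.3, only begins pulling further below), so the stat fails and every RunPodSandbox and StopPodSandbox call bubbles the error up through containerd to kubelet. A simplified rendering of the gate implied by the error text, offered as an illustration rather than Calico's actual source:

```go
// nodename_check_sketch.go — the gate the failing sandboxes are hitting.
// Illustrative assumption: a simplified version of the check described
// by the error message, not Calico's real implementation.
package main

import (
	"fmt"
	"os"
	"strings"
)

// readNodename fails exactly like the log until calico/node has started
// and written its node name into /var/lib/calico/nodename.
func readNodename() (string, error) {
	const path = "/var/lib/calico/nodename"
	if _, err := os.Stat(path); err != nil {
		// Mirrors the logged text: "stat /var/lib/calico/nodename: no such
		// file or directory: check that the calico/node container is
		// running and has mounted /var/lib/calico/"
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // the state this log captures
		os.Exit(1)
	}
	fmt.Println("CNI calls can proceed for node", name)
}
```

Once calico-node starts and writes the file, the same CNI invocations succeed and kubelet's retries create the pending sandboxes.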
Apr 30 03:29:55.762454 containerd[1693]: time="2025-04-30T03:29:55.761903068Z" level=error msg="Failed to destroy network for sandbox \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.763374 containerd[1693]: time="2025-04-30T03:29:55.763289876Z" level=error msg="Failed to destroy network for sandbox \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.765721 containerd[1693]: time="2025-04-30T03:29:55.765683988Z" level=error msg="encountered an error cleaning up failed sandbox \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.765813 containerd[1693]: time="2025-04-30T03:29:55.765754888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xqthf,Uid:79a6da92-25f7-40b3-a880-7f6f766b31fd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.766336 kubelet[3184]: E0430 03:29:55.765937 3184 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.766336 kubelet[3184]: E0430 03:29:55.765994 3184 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xqthf" Apr 30 03:29:55.766336 kubelet[3184]: E0430 03:29:55.766021 3184 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xqthf" Apr 30 03:29:55.766620 kubelet[3184]: E0430 03:29:55.766077 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xqthf_calico-system(79a6da92-25f7-40b3-a880-7f6f766b31fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-xqthf_calico-system(79a6da92-25f7-40b3-a880-7f6f766b31fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xqthf" podUID="79a6da92-25f7-40b3-a880-7f6f766b31fd" Apr 30 03:29:55.768050 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e-shm.mount: Deactivated successfully. Apr 30 03:29:55.769436 kubelet[3184]: E0430 03:29:55.769012 3184 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.769436 kubelet[3184]: E0430 03:29:55.769060 3184 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2l4hw" Apr 30 03:29:55.769436 kubelet[3184]: E0430 03:29:55.769085 3184 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2l4hw" Apr 30 03:29:55.769610 containerd[1693]: time="2025-04-30T03:29:55.768496003Z" level=error msg="encountered an error cleaning up failed sandbox \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.769610 containerd[1693]: time="2025-04-30T03:29:55.768547303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2l4hw,Uid:7af10b02-117f-4e7d-ab6d-30d146cf4d03,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.769610 containerd[1693]: time="2025-04-30T03:29:55.769064606Z" level=error msg="Failed to destroy network for sandbox \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.769841 kubelet[3184]: E0430 03:29:55.769132 3184 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2l4hw_kube-system(7af10b02-117f-4e7d-ab6d-30d146cf4d03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2l4hw_kube-system(7af10b02-117f-4e7d-ab6d-30d146cf4d03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2l4hw" podUID="7af10b02-117f-4e7d-ab6d-30d146cf4d03" Apr 30 03:29:55.770121 containerd[1693]: time="2025-04-30T03:29:55.770028011Z" level=error msg="Failed to destroy network for sandbox \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.770642 containerd[1693]: time="2025-04-30T03:29:55.770460213Z" level=error msg="encountered an error cleaning up failed sandbox \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.770642 containerd[1693]: time="2025-04-30T03:29:55.770513813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xfl4g,Uid:ac7b6b9e-a78e-4c10-8774-981b5e31a478,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.771186 kubelet[3184]: E0430 03:29:55.770832 3184 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.771186 kubelet[3184]: E0430 03:29:55.770874 3184 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xfl4g" Apr 30 03:29:55.771186 kubelet[3184]: E0430 03:29:55.770900 3184 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xfl4g" Apr 30 
03:29:55.771373 kubelet[3184]: E0430 03:29:55.770939 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xfl4g_kube-system(ac7b6b9e-a78e-4c10-8774-981b5e31a478)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xfl4g_kube-system(ac7b6b9e-a78e-4c10-8774-981b5e31a478)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xfl4g" podUID="ac7b6b9e-a78e-4c10-8774-981b5e31a478" Apr 30 03:29:55.773357 containerd[1693]: time="2025-04-30T03:29:55.772596224Z" level=error msg="encountered an error cleaning up failed sandbox \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.773357 containerd[1693]: time="2025-04-30T03:29:55.772650324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df5fd9db9-tlbv5,Uid:c40e58a4-a506-47e3-a7c8-b9609b315d66,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.773559 kubelet[3184]: E0430 03:29:55.773218 3184 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.773559 kubelet[3184]: E0430 03:29:55.773254 3184 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df5fd9db9-tlbv5" Apr 30 03:29:55.773559 kubelet[3184]: E0430 03:29:55.773277 3184 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df5fd9db9-tlbv5" Apr 30 03:29:55.773697 kubelet[3184]: E0430 03:29:55.773317 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5df5fd9db9-tlbv5_calico-apiserver(c40e58a4-a506-47e3-a7c8-b9609b315d66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5df5fd9db9-tlbv5_calico-apiserver(c40e58a4-a506-47e3-a7c8-b9609b315d66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df5fd9db9-tlbv5" podUID="c40e58a4-a506-47e3-a7c8-b9609b315d66" Apr 30 03:29:55.778699 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c-shm.mount: Deactivated successfully. Apr 30 03:29:55.778809 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637-shm.mount: Deactivated successfully. Apr 30 03:29:55.778887 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f-shm.mount: Deactivated successfully. Apr 30 03:29:55.781890 containerd[1693]: time="2025-04-30T03:29:55.781788272Z" level=error msg="Failed to destroy network for sandbox \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.783415 containerd[1693]: time="2025-04-30T03:29:55.782789277Z" level=error msg="encountered an error cleaning up failed sandbox \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.783415 containerd[1693]: time="2025-04-30T03:29:55.782849377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df5fd9db9-8qshg,Uid:8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.784002 kubelet[3184]: E0430 03:29:55.783739 3184 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:55.784002 kubelet[3184]: E0430 03:29:55.783778 3184 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df5fd9db9-8qshg" Apr 30 03:29:55.784002 kubelet[3184]: E0430 03:29:55.783797 3184 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df5fd9db9-8qshg" Apr 30 03:29:55.784179 kubelet[3184]: E0430 03:29:55.783835 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5df5fd9db9-8qshg_calico-apiserver(8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5df5fd9db9-8qshg_calico-apiserver(8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df5fd9db9-8qshg" podUID="8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49" Apr 30 03:29:55.786081 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91-shm.mount: Deactivated successfully. Apr 30 03:29:55.928704 kubelet[3184]: I0430 03:29:55.928669 3184 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Apr 30 03:29:55.930212 containerd[1693]: time="2025-04-30T03:29:55.929967540Z" level=info msg="StopPodSandbox for \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\"" Apr 30 03:29:55.930380 containerd[1693]: time="2025-04-30T03:29:55.930216741Z" level=info msg="Ensure that sandbox 41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91 in task-service has been cleanup successfully" Apr 30 03:29:55.931046 kubelet[3184]: I0430 03:29:55.931023 3184 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Apr 30 03:29:55.932500 containerd[1693]: time="2025-04-30T03:29:55.931536348Z" level=info msg="StopPodSandbox for \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\"" Apr 30 03:29:55.932500 containerd[1693]: time="2025-04-30T03:29:55.931713149Z" level=info msg="Ensure that sandbox 0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e in task-service has been cleanup successfully" Apr 30 03:29:55.935719 kubelet[3184]: I0430 03:29:55.935275 3184 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Apr 30 03:29:55.936906 containerd[1693]: time="2025-04-30T03:29:55.936881976Z" level=info msg="StopPodSandbox for \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\"" Apr 30 03:29:55.937197 containerd[1693]: time="2025-04-30T03:29:55.937170977Z" level=info msg="Ensure that sandbox 4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f in task-service has been cleanup successfully" Apr 30 03:29:55.946222 containerd[1693]: time="2025-04-30T03:29:55.946137324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:29:55.948870 kubelet[3184]: I0430 03:29:55.948800 3184 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Apr 30 03:29:55.952454 containerd[1693]: time="2025-04-30T03:29:55.952271556Z" level=info msg="StopPodSandbox for \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\"" Apr 30 03:29:55.954407 containerd[1693]: time="2025-04-30T03:29:55.954341866Z" level=info msg="Ensure that sandbox 48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c in task-service has been cleanup successfully" Apr 30 03:29:55.966291 kubelet[3184]: I0430 03:29:55.966098 3184 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Apr 30 03:29:55.974114 containerd[1693]: time="2025-04-30T03:29:55.973982368Z" level=info msg="StopPodSandbox for \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\"" Apr 30 03:29:55.981199 containerd[1693]: time="2025-04-30T03:29:55.980577103Z" level=info msg="Ensure that sandbox 60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a in task-service has been cleanup successfully" Apr 30 03:29:56.011040 kubelet[3184]: I0430 03:29:56.011017 3184 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Apr 30 03:29:56.015404 containerd[1693]: time="2025-04-30T03:29:56.014879080Z" level=info msg="StopPodSandbox for \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\"" Apr 30 03:29:56.016955 containerd[1693]: time="2025-04-30T03:29:56.016660290Z" level=info msg="Ensure that sandbox 8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637 in task-service has been cleanup successfully" Apr 30 03:29:56.017417 containerd[1693]: time="2025-04-30T03:29:56.016852491Z" level=error msg="StopPodSandbox for \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\" failed" error="failed to destroy network for sandbox \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:56.019744 kubelet[3184]: E0430 03:29:56.019710 3184 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Apr 30 03:29:56.020132 kubelet[3184]: E0430 03:29:56.019897 3184 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f"} Apr 30 03:29:56.020132 kubelet[3184]: E0430 03:29:56.019970 3184 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ac7b6b9e-a78e-4c10-8774-981b5e31a478\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Apr 30 03:29:56.020132 kubelet[3184]: E0430 03:29:56.019998 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ac7b6b9e-a78e-4c10-8774-981b5e31a478\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xfl4g" podUID="ac7b6b9e-a78e-4c10-8774-981b5e31a478" Apr 30 03:29:56.051448 containerd[1693]: time="2025-04-30T03:29:56.051254420Z" level=error msg="StopPodSandbox for \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\" failed" error="failed to destroy network for sandbox \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:56.052103 kubelet[3184]: E0430 03:29:56.051844 3184 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Apr 30 03:29:56.052103 kubelet[3184]: E0430 03:29:56.051890 3184 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91"} Apr 30 03:29:56.052103 kubelet[3184]: E0430 03:29:56.051934 3184 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:56.052103 kubelet[3184]: E0430 03:29:56.051972 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df5fd9db9-8qshg" podUID="8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49" Apr 30 03:29:56.057662 containerd[1693]: time="2025-04-30T03:29:56.057528315Z" level=error msg="StopPodSandbox for \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\" failed" error="failed to destroy network for sandbox \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 30 03:29:56.058122 kubelet[3184]: E0430 03:29:56.057879 3184 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Apr 30 03:29:56.058122 kubelet[3184]: E0430 03:29:56.057944 3184 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e"} Apr 30 03:29:56.058122 kubelet[3184]: E0430 03:29:56.057981 3184 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79a6da92-25f7-40b3-a880-7f6f766b31fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:56.058122 kubelet[3184]: E0430 03:29:56.058012 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79a6da92-25f7-40b3-a880-7f6f766b31fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xqthf" podUID="79a6da92-25f7-40b3-a880-7f6f766b31fd" Apr 30 03:29:56.084452 containerd[1693]: time="2025-04-30T03:29:56.084405709Z" level=error msg="StopPodSandbox for \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\" failed" error="failed to destroy network for sandbox \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:56.085250 kubelet[3184]: E0430 03:29:56.084691 3184 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Apr 30 03:29:56.085250 kubelet[3184]: E0430 03:29:56.084763 3184 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c"} Apr 30 03:29:56.085250 kubelet[3184]: E0430 03:29:56.084803 3184 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c40e58a4-a506-47e3-a7c8-b9609b315d66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:56.085250 kubelet[3184]: E0430 03:29:56.084834 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c40e58a4-a506-47e3-a7c8-b9609b315d66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df5fd9db9-tlbv5" podUID="c40e58a4-a506-47e3-a7c8-b9609b315d66" Apr 30 03:29:56.085773 containerd[1693]: time="2025-04-30T03:29:56.085729292Z" level=error msg="StopPodSandbox for \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\" failed" error="failed to destroy network for sandbox \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:56.086164 kubelet[3184]: E0430 03:29:56.086010 3184 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Apr 30 03:29:56.086164 kubelet[3184]: E0430 03:29:56.086050 3184 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637"} Apr 30 03:29:56.086164 kubelet[3184]: E0430 03:29:56.086091 3184 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7af10b02-117f-4e7d-ab6d-30d146cf4d03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:56.086164 kubelet[3184]: E0430 03:29:56.086119 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7af10b02-117f-4e7d-ab6d-30d146cf4d03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2l4hw" podUID="7af10b02-117f-4e7d-ab6d-30d146cf4d03" Apr 30 03:29:56.088820 containerd[1693]: time="2025-04-30T03:29:56.088776784Z" level=error msg="StopPodSandbox for \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\" 
failed" error="failed to destroy network for sandbox \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:56.089005 kubelet[3184]: E0430 03:29:56.088952 3184 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Apr 30 03:29:56.089078 kubelet[3184]: E0430 03:29:56.089013 3184 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a"} Apr 30 03:29:56.089078 kubelet[3184]: E0430 03:29:56.089054 3184 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d053a264-e44d-4450-bd67-987ac2ab6edc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:56.089175 kubelet[3184]: E0430 03:29:56.089093 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d053a264-e44d-4450-bd67-987ac2ab6edc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-89d6c9f55-qzrp4" podUID="d053a264-e44d-4450-bd67-987ac2ab6edc" Apr 30 03:30:03.690939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705038688.mount: Deactivated successfully. 
Apr 30 03:30:03.734705 containerd[1693]: time="2025-04-30T03:30:03.734587022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:03.736739 containerd[1693]: time="2025-04-30T03:30:03.736690528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:30:03.740673 containerd[1693]: time="2025-04-30T03:30:03.740625040Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:03.744349 containerd[1693]: time="2025-04-30T03:30:03.744298652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:03.744875 containerd[1693]: time="2025-04-30T03:30:03.744841253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.798663429s" Apr 30 03:30:03.745073 containerd[1693]: time="2025-04-30T03:30:03.744962754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:30:03.766394 containerd[1693]: time="2025-04-30T03:30:03.762963109Z" level=info msg="CreateContainer within sandbox \"14497a0495bf99e0ccfa0f3f4faf478737e8e9e97c6062d18b5c60474bf99b9e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:30:03.809165 containerd[1693]: time="2025-04-30T03:30:03.809131651Z" level=info msg="CreateContainer within sandbox \"14497a0495bf99e0ccfa0f3f4faf478737e8e9e97c6062d18b5c60474bf99b9e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7034f4e008adf430103835a1dec0c29f0b935ec273f04d97933b27a1cb219b90\"" Apr 30 03:30:03.810984 containerd[1693]: time="2025-04-30T03:30:03.809548152Z" level=info msg="StartContainer for \"7034f4e008adf430103835a1dec0c29f0b935ec273f04d97933b27a1cb219b90\"" Apr 30 03:30:03.834761 systemd[1]: Started cri-containerd-7034f4e008adf430103835a1dec0c29f0b935ec273f04d97933b27a1cb219b90.scope - libcontainer container 7034f4e008adf430103835a1dec0c29f0b935ec273f04d97933b27a1cb219b90. 
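
The retry storm ends here: the calico/node image finishes pulling (144,068,748 bytes in 7.798663429s) and the calico-node container starts under the runc shim. The PullImage/Pulled pair is emitted by containerd's CRI plugin; a rough equivalent with the containerd Go client looks like the sketch below (socket path and the k8s.io namespace are the usual defaults, assumed here):

    package main

    import (
    	"context"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Kubernetes-managed images live in containerd's "k8s.io" namespace.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Pull and unpack the same image the kubelet requested above.
    	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.29.3", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("pulled %s", img.Name())
    }
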
Apr 30 03:30:03.863520 containerd[1693]: time="2025-04-30T03:30:03.863480318Z" level=info msg="StartContainer for \"7034f4e008adf430103835a1dec0c29f0b935ec273f04d97933b27a1cb219b90\" returns successfully" Apr 30 03:30:04.012063 kubelet[3184]: I0430 03:30:04.011963 3184 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:04.080999 kubelet[3184]: I0430 03:30:04.080935 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-w4g96" podStartSLOduration=1.503544356 podStartE2EDuration="23.080805148s" podCreationTimestamp="2025-04-30 03:29:41 +0000 UTC" firstStartedPulling="2025-04-30 03:29:42.170203669 +0000 UTC m=+17.433662123" lastFinishedPulling="2025-04-30 03:30:03.747464461 +0000 UTC m=+39.010922915" observedRunningTime="2025-04-30 03:30:04.078622041 +0000 UTC m=+39.342080495" watchObservedRunningTime="2025-04-30 03:30:04.080805148 +0000 UTC m=+39.344263502" Apr 30 03:30:04.095754 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:30:04.095881 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Apr 30 03:30:05.747417 kernel: bpftool[4477]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:30:06.035379 systemd-networkd[1484]: vxlan.calico: Link UP Apr 30 03:30:06.035391 systemd-networkd[1484]: vxlan.calico: Gained carrier Apr 30 03:30:06.821358 containerd[1693]: time="2025-04-30T03:30:06.821296666Z" level=info msg="StopPodSandbox for \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\"" Apr 30 03:30:06.901237 containerd[1693]: 2025-04-30 03:30:06.868 [INFO][4563] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Apr 30 03:30:06.901237 containerd[1693]: 2025-04-30 03:30:06.869 [INFO][4563] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" iface="eth0" netns="/var/run/netns/cni-c827ab23-428a-5ced-f00f-fd187f157670" Apr 30 03:30:06.901237 containerd[1693]: 2025-04-30 03:30:06.869 [INFO][4563] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" iface="eth0" netns="/var/run/netns/cni-c827ab23-428a-5ced-f00f-fd187f157670" Apr 30 03:30:06.901237 containerd[1693]: 2025-04-30 03:30:06.869 [INFO][4563] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" iface="eth0" netns="/var/run/netns/cni-c827ab23-428a-5ced-f00f-fd187f157670" Apr 30 03:30:06.901237 containerd[1693]: 2025-04-30 03:30:06.869 [INFO][4563] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Apr 30 03:30:06.901237 containerd[1693]: 2025-04-30 03:30:06.869 [INFO][4563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Apr 30 03:30:06.901237 containerd[1693]: 2025-04-30 03:30:06.891 [INFO][4570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" HandleID="k8s-pod-network.48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:06.901237 containerd[1693]: 2025-04-30 03:30:06.891 [INFO][4570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:06.901237 containerd[1693]: 2025-04-30 03:30:06.891 [INFO][4570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:06.901237 containerd[1693]: 2025-04-30 03:30:06.897 [WARNING][4570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" HandleID="k8s-pod-network.48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:06.901237 containerd[1693]: 2025-04-30 03:30:06.897 [INFO][4570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" HandleID="k8s-pod-network.48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:06.901237 containerd[1693]: 2025-04-30 03:30:06.898 [INFO][4570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:06.901237 containerd[1693]: 2025-04-30 03:30:06.900 [INFO][4563] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Apr 30 03:30:06.907462 containerd[1693]: time="2025-04-30T03:30:06.901478488Z" level=info msg="TearDown network for sandbox \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\" successfully" Apr 30 03:30:06.907462 containerd[1693]: time="2025-04-30T03:30:06.902428893Z" level=info msg="StopPodSandbox for \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\" returns successfully" Apr 30 03:30:06.907462 containerd[1693]: time="2025-04-30T03:30:06.903946301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df5fd9db9-tlbv5,Uid:c40e58a4-a506-47e3-a7c8-b9609b315d66,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:30:06.905218 systemd[1]: run-netns-cni\x2dc827ab23\x2d428a\x2d5ced\x2df00f\x2dfd187f157670.mount: Deactivated successfully. 
Apr 30 03:30:07.040347 systemd-networkd[1484]: calib7581b2ba80: Link UP Apr 30 03:30:07.040649 systemd-networkd[1484]: calib7581b2ba80: Gained carrier Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:06.975 [INFO][4578] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0 calico-apiserver-5df5fd9db9- calico-apiserver c40e58a4-a506-47e3-a7c8-b9609b315d66 774 0 2025-04-30 03:29:41 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5df5fd9db9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-a5554f61da calico-apiserver-5df5fd9db9-tlbv5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib7581b2ba80 [] []}} ContainerID="da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-tlbv5" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-" Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:06.978 [INFO][4578] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-tlbv5" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.002 [INFO][4589] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" HandleID="k8s-pod-network.da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.010 [INFO][4589] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" HandleID="k8s-pod-network.da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000311280), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-a5554f61da", "pod":"calico-apiserver-5df5fd9db9-tlbv5", "timestamp":"2025-04-30 03:30:07.002543119 +0000 UTC"}, Hostname:"ci-4081.3.3-a-a5554f61da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.010 [INFO][4589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.010 [INFO][4589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.010 [INFO][4589] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-a5554f61da' Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.012 [INFO][4589] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.015 [INFO][4589] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.018 [INFO][4589] ipam/ipam.go 489: Trying affinity for 192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.020 [INFO][4589] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.022 [INFO][4589] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.022 [INFO][4589] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.023 [INFO][4589] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2 Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.030 [INFO][4589] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.035 [INFO][4589] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.129/26] block=192.168.107.128/26 handle="k8s-pod-network.da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.035 [INFO][4589] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.129/26] handle="k8s-pod-network.da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.035 [INFO][4589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
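
This walk is Calico's block-affinity IPAM: the host ci-4081.3.3-a-a5554f61da holds an affinity for the /26 block 192.168.107.128/26, so the first workload address claimed from it is 192.168.107.129. The containment and block-size arithmetic can be checked with net/netip:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	block := netip.MustParsePrefix("192.168.107.128/26")
    	addr := netip.MustParseAddr("192.168.107.129")

    	// A /26 leaves 32-26 = 6 host bits: 2^6 = 64 addresses per block.
    	size := 1 << (32 - block.Bits())
    	fmt.Printf("block %s holds %d addresses\n", block, size) // 64

    	// The address claimed in the log falls inside the affine block.
    	fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr)) // true
    }
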
Apr 30 03:30:07.061087 containerd[1693]: 2025-04-30 03:30:07.035 [INFO][4589] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.129/26] IPv6=[] ContainerID="da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" HandleID="k8s-pod-network.da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:07.062881 containerd[1693]: 2025-04-30 03:30:07.037 [INFO][4578] cni-plugin/k8s.go 386: Populated endpoint ContainerID="da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-tlbv5" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0", GenerateName:"calico-apiserver-5df5fd9db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c40e58a4-a506-47e3-a7c8-b9609b315d66", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df5fd9db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"", Pod:"calico-apiserver-5df5fd9db9-tlbv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib7581b2ba80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:07.062881 containerd[1693]: 2025-04-30 03:30:07.037 [INFO][4578] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.129/32] ContainerID="da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-tlbv5" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:07.062881 containerd[1693]: 2025-04-30 03:30:07.037 [INFO][4578] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7581b2ba80 ContainerID="da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-tlbv5" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:07.062881 containerd[1693]: 2025-04-30 03:30:07.040 [INFO][4578] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-tlbv5" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:07.062881 containerd[1693]: 2025-04-30 03:30:07.040 [INFO][4578] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-tlbv5" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0", GenerateName:"calico-apiserver-5df5fd9db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c40e58a4-a506-47e3-a7c8-b9609b315d66", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df5fd9db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2", Pod:"calico-apiserver-5df5fd9db9-tlbv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib7581b2ba80", MAC:"0e:91:89:ce:45:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:07.062881 containerd[1693]: 2025-04-30 03:30:07.058 [INFO][4578] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-tlbv5" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:07.086676 containerd[1693]: time="2025-04-30T03:30:07.086255759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:07.086676 containerd[1693]: time="2025-04-30T03:30:07.086319959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:07.086676 containerd[1693]: time="2025-04-30T03:30:07.086335759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:07.086676 containerd[1693]: time="2025-04-30T03:30:07.086562760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:07.112513 systemd[1]: Started cri-containerd-da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2.scope - libcontainer container da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2. 
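
Before the endpoint is written back to the datastore, the plugin fills in the host-side veth details, including the generated MAC 0e:91:89:ce:45:4b; note the first octet 0x0e has the locally-administered bit (0x02) set and the multicast bit (0x01) clear. A sketch of generating an address with that shape (the log does not show Calico's exact scheme; this is just the standard construction):

    package main

    import (
    	"crypto/rand"
    	"fmt"
    	"net"
    )

    // randomUnicastMAC returns a locally administered, unicast MAC:
    // bit 0x01 (multicast) cleared, bit 0x02 (locally administered) set.
    func randomUnicastMAC() (net.HardwareAddr, error) {
    	buf := make([]byte, 6)
    	if _, err := rand.Read(buf); err != nil {
    		return nil, err
    	}
    	buf[0] = (buf[0] &^ 0x01) | 0x02
    	return net.HardwareAddr(buf), nil
    }

    func main() {
    	mac, err := randomUnicastMAC()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(mac) // e.g. 0e:91:89:ce:45:4b carries the same two flag bits
    }
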
Apr 30 03:30:07.149135 containerd[1693]: time="2025-04-30T03:30:07.149092189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df5fd9db9-tlbv5,Uid:c40e58a4-a506-47e3-a7c8-b9609b315d66,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2\"" Apr 30 03:30:07.150767 containerd[1693]: time="2025-04-30T03:30:07.150694497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:30:07.552611 systemd-networkd[1484]: vxlan.calico: Gained IPv6LL Apr 30 03:30:08.703743 systemd-networkd[1484]: calib7581b2ba80: Gained IPv6LL Apr 30 03:30:08.820741 containerd[1693]: time="2025-04-30T03:30:08.819716368Z" level=info msg="StopPodSandbox for \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\"" Apr 30 03:30:08.911457 containerd[1693]: 2025-04-30 03:30:08.864 [INFO][4664] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Apr 30 03:30:08.911457 containerd[1693]: 2025-04-30 03:30:08.864 [INFO][4664] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" iface="eth0" netns="/var/run/netns/cni-ae1e2fb7-ad29-86ae-bb9d-580eacac7976" Apr 30 03:30:08.911457 containerd[1693]: 2025-04-30 03:30:08.865 [INFO][4664] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" iface="eth0" netns="/var/run/netns/cni-ae1e2fb7-ad29-86ae-bb9d-580eacac7976" Apr 30 03:30:08.911457 containerd[1693]: 2025-04-30 03:30:08.865 [INFO][4664] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" iface="eth0" netns="/var/run/netns/cni-ae1e2fb7-ad29-86ae-bb9d-580eacac7976" Apr 30 03:30:08.911457 containerd[1693]: 2025-04-30 03:30:08.865 [INFO][4664] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Apr 30 03:30:08.911457 containerd[1693]: 2025-04-30 03:30:08.865 [INFO][4664] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Apr 30 03:30:08.911457 containerd[1693]: 2025-04-30 03:30:08.898 [INFO][4671] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" HandleID="k8s-pod-network.60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:08.911457 containerd[1693]: 2025-04-30 03:30:08.898 [INFO][4671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:08.911457 containerd[1693]: 2025-04-30 03:30:08.898 [INFO][4671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:08.911457 containerd[1693]: 2025-04-30 03:30:08.907 [WARNING][4671] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" HandleID="k8s-pod-network.60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:08.911457 containerd[1693]: 2025-04-30 03:30:08.907 [INFO][4671] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" HandleID="k8s-pod-network.60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:08.911457 containerd[1693]: 2025-04-30 03:30:08.909 [INFO][4671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:08.911457 containerd[1693]: 2025-04-30 03:30:08.910 [INFO][4664] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Apr 30 03:30:08.912447 containerd[1693]: time="2025-04-30T03:30:08.912277154Z" level=info msg="TearDown network for sandbox \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\" successfully" Apr 30 03:30:08.912447 containerd[1693]: time="2025-04-30T03:30:08.912315354Z" level=info msg="StopPodSandbox for \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\" returns successfully" Apr 30 03:30:08.915292 containerd[1693]: time="2025-04-30T03:30:08.914715067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-89d6c9f55-qzrp4,Uid:d053a264-e44d-4450-bd67-987ac2ab6edc,Namespace:calico-system,Attempt:1,}" Apr 30 03:30:08.916673 systemd[1]: run-netns-cni\x2dae1e2fb7\x2dad29\x2d86ae\x2dbb9d\x2d580eacac7976.mount: Deactivated successfully. 
Apr 30 03:30:09.071172 systemd-networkd[1484]: cali7a32d6772b6: Link UP Apr 30 03:30:09.072465 systemd-networkd[1484]: cali7a32d6772b6: Gained carrier Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:08.997 [INFO][4678] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0 calico-kube-controllers-89d6c9f55- calico-system d053a264-e44d-4450-bd67-987ac2ab6edc 783 0 2025-04-30 03:29:41 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:89d6c9f55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-a-a5554f61da calico-kube-controllers-89d6c9f55-qzrp4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7a32d6772b6 [] []}} ContainerID="cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" Namespace="calico-system" Pod="calico-kube-controllers-89d6c9f55-qzrp4" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-" Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:08.998 [INFO][4678] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" Namespace="calico-system" Pod="calico-kube-controllers-89d6c9f55-qzrp4" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.026 [INFO][4690] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" HandleID="k8s-pod-network.cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.038 [INFO][4690] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" HandleID="k8s-pod-network.cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003322c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-a5554f61da", "pod":"calico-kube-controllers-89d6c9f55-qzrp4", "timestamp":"2025-04-30 03:30:09.026110352 +0000 UTC"}, Hostname:"ci-4081.3.3-a-a5554f61da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.038 [INFO][4690] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.038 [INFO][4690] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.038 [INFO][4690] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-a5554f61da' Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.041 [INFO][4690] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.044 [INFO][4690] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.049 [INFO][4690] ipam/ipam.go 489: Trying affinity for 192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.051 [INFO][4690] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.053 [INFO][4690] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.053 [INFO][4690] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.054 [INFO][4690] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76 Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.061 [INFO][4690] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.066 [INFO][4690] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.130/26] block=192.168.107.128/26 handle="k8s-pod-network.cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.066 [INFO][4690] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.130/26] handle="k8s-pod-network.cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.066 [INFO][4690] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
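
As with the first pod, the assignment runs inside the "host-wide IPAM lock" bracket, so concurrent CNI invocations on the node serialize their writes to the block. The log does not show how that lock is implemented; purely as an illustration of the pattern, a cross-process critical section can be built on flock, as in this sketch (the lock-file path is hypothetical):

    package main

    import (
    	"log"
    	"os"

    	"golang.org/x/sys/unix"
    )

    // withHostWideLock serializes a critical section across processes on one
    // host with an exclusive flock, the way the log's host-wide IPAM lock
    // serializes the two address assignments above.
    func withHostWideLock(path string, fn func() error) error {
    	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
    		return err
    	}
    	// Released on unlock (or implicitly when the fd closes).
    	defer unix.Flock(int(f.Fd()), unix.LOCK_UN)

    	return fn()
    }

    func main() {
    	err := withHostWideLock("/tmp/ipam.lock", func() error {
    		log.Println("assigning addresses under the lock")
    		return nil
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }
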
Apr 30 03:30:09.089247 containerd[1693]: 2025-04-30 03:30:09.066 [INFO][4690] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.130/26] IPv6=[] ContainerID="cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" HandleID="k8s-pod-network.cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:09.090563 containerd[1693]: 2025-04-30 03:30:09.067 [INFO][4678] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" Namespace="calico-system" Pod="calico-kube-controllers-89d6c9f55-qzrp4" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0", GenerateName:"calico-kube-controllers-89d6c9f55-", Namespace:"calico-system", SelfLink:"", UID:"d053a264-e44d-4450-bd67-987ac2ab6edc", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"89d6c9f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"", Pod:"calico-kube-controllers-89d6c9f55-qzrp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7a32d6772b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:09.090563 containerd[1693]: 2025-04-30 03:30:09.068 [INFO][4678] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.130/32] ContainerID="cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" Namespace="calico-system" Pod="calico-kube-controllers-89d6c9f55-qzrp4" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:09.090563 containerd[1693]: 2025-04-30 03:30:09.068 [INFO][4678] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a32d6772b6 ContainerID="cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" Namespace="calico-system" Pod="calico-kube-controllers-89d6c9f55-qzrp4" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:09.090563 containerd[1693]: 2025-04-30 03:30:09.072 [INFO][4678] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" Namespace="calico-system" Pod="calico-kube-controllers-89d6c9f55-qzrp4" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:09.090563 
containerd[1693]: 2025-04-30 03:30:09.073 [INFO][4678] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" Namespace="calico-system" Pod="calico-kube-controllers-89d6c9f55-qzrp4" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0", GenerateName:"calico-kube-controllers-89d6c9f55-", Namespace:"calico-system", SelfLink:"", UID:"d053a264-e44d-4450-bd67-987ac2ab6edc", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"89d6c9f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76", Pod:"calico-kube-controllers-89d6c9f55-qzrp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7a32d6772b6", MAC:"f6:21:cf:ac:20:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:09.090563 containerd[1693]: 2025-04-30 03:30:09.086 [INFO][4678] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76" Namespace="calico-system" Pod="calico-kube-controllers-89d6c9f55-qzrp4" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:09.125624 containerd[1693]: time="2025-04-30T03:30:09.125538775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:09.126340 containerd[1693]: time="2025-04-30T03:30:09.125645975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:09.126340 containerd[1693]: time="2025-04-30T03:30:09.125742676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:09.126599 containerd[1693]: time="2025-04-30T03:30:09.126561180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:09.158783 systemd[1]: Started cri-containerd-cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76.scope - libcontainer container cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76. 
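
Each successful RunPodSandbox ends the same way: the io.containerd.runc.v2 shim loads its ttrpc plugins and systemd starts a cri-containerd-<id>.scope for the sandbox container. From the host, the new sandbox can be confirmed with the containerd Go client (same assumed socket and namespace as in the pull sketch above):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Sandbox (pause) containers show up alongside workload containers.
    	containers, err := client.Containers(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range containers {
    		fmt.Println(c.ID()) // e.g. cfab528de7a9710c7430bb6f76efe2...
    	}
    }
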
Apr 30 03:30:09.201652 containerd[1693]: time="2025-04-30T03:30:09.201617775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-89d6c9f55-qzrp4,Uid:d053a264-e44d-4450-bd67-987ac2ab6edc,Namespace:calico-system,Attempt:1,} returns sandbox id \"cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76\"" Apr 30 03:30:09.820690 containerd[1693]: time="2025-04-30T03:30:09.820320526Z" level=info msg="StopPodSandbox for \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\"" Apr 30 03:30:09.821005 containerd[1693]: time="2025-04-30T03:30:09.820976429Z" level=info msg="StopPodSandbox for \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\"" Apr 30 03:30:09.833997 containerd[1693]: time="2025-04-30T03:30:09.833960197Z" level=info msg="StopPodSandbox for \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\"" Apr 30 03:30:10.037795 containerd[1693]: 2025-04-30 03:30:09.950 [INFO][4793] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Apr 30 03:30:10.037795 containerd[1693]: 2025-04-30 03:30:09.950 [INFO][4793] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" iface="eth0" netns="/var/run/netns/cni-dae23b80-4d74-ca0f-563c-a97e2dc656c7" Apr 30 03:30:10.037795 containerd[1693]: 2025-04-30 03:30:09.950 [INFO][4793] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" iface="eth0" netns="/var/run/netns/cni-dae23b80-4d74-ca0f-563c-a97e2dc656c7" Apr 30 03:30:10.037795 containerd[1693]: 2025-04-30 03:30:09.951 [INFO][4793] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" iface="eth0" netns="/var/run/netns/cni-dae23b80-4d74-ca0f-563c-a97e2dc656c7" Apr 30 03:30:10.037795 containerd[1693]: 2025-04-30 03:30:09.951 [INFO][4793] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Apr 30 03:30:10.037795 containerd[1693]: 2025-04-30 03:30:09.951 [INFO][4793] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Apr 30 03:30:10.037795 containerd[1693]: 2025-04-30 03:30:10.020 [INFO][4811] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" HandleID="k8s-pod-network.0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Workload="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:10.037795 containerd[1693]: 2025-04-30 03:30:10.020 [INFO][4811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:10.037795 containerd[1693]: 2025-04-30 03:30:10.020 [INFO][4811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:10.037795 containerd[1693]: 2025-04-30 03:30:10.031 [WARNING][4811] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" HandleID="k8s-pod-network.0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Workload="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:10.037795 containerd[1693]: 2025-04-30 03:30:10.031 [INFO][4811] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" HandleID="k8s-pod-network.0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Workload="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:10.037795 containerd[1693]: 2025-04-30 03:30:10.033 [INFO][4811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:10.037795 containerd[1693]: 2025-04-30 03:30:10.035 [INFO][4793] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Apr 30 03:30:10.045348 containerd[1693]: time="2025-04-30T03:30:10.044443203Z" level=info msg="TearDown network for sandbox \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\" successfully" Apr 30 03:30:10.045348 containerd[1693]: time="2025-04-30T03:30:10.044482004Z" level=info msg="StopPodSandbox for \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\" returns successfully" Apr 30 03:30:10.046099 containerd[1693]: time="2025-04-30T03:30:10.046067212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xqthf,Uid:79a6da92-25f7-40b3-a880-7f6f766b31fd,Namespace:calico-system,Attempt:1,}" Apr 30 03:30:10.046912 systemd[1]: run-netns-cni\x2ddae23b80\x2d4d74\x2dca0f\x2d563c\x2da97e2dc656c7.mount: Deactivated successfully. Apr 30 03:30:10.079336 containerd[1693]: 2025-04-30 03:30:09.979 [INFO][4792] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Apr 30 03:30:10.079336 containerd[1693]: 2025-04-30 03:30:09.979 [INFO][4792] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" iface="eth0" netns="/var/run/netns/cni-f647596b-9b80-7e3c-d270-5d7e5591f034" Apr 30 03:30:10.079336 containerd[1693]: 2025-04-30 03:30:09.979 [INFO][4792] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" iface="eth0" netns="/var/run/netns/cni-f647596b-9b80-7e3c-d270-5d7e5591f034" Apr 30 03:30:10.079336 containerd[1693]: 2025-04-30 03:30:09.983 [INFO][4792] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" iface="eth0" netns="/var/run/netns/cni-f647596b-9b80-7e3c-d270-5d7e5591f034" Apr 30 03:30:10.079336 containerd[1693]: 2025-04-30 03:30:09.983 [INFO][4792] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Apr 30 03:30:10.079336 containerd[1693]: 2025-04-30 03:30:09.983 [INFO][4792] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Apr 30 03:30:10.079336 containerd[1693]: 2025-04-30 03:30:10.060 [INFO][4821] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" HandleID="k8s-pod-network.41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:10.079336 containerd[1693]: 2025-04-30 03:30:10.060 [INFO][4821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:10.079336 containerd[1693]: 2025-04-30 03:30:10.060 [INFO][4821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:10.079336 containerd[1693]: 2025-04-30 03:30:10.072 [WARNING][4821] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" HandleID="k8s-pod-network.41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:10.079336 containerd[1693]: 2025-04-30 03:30:10.072 [INFO][4821] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" HandleID="k8s-pod-network.41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:10.079336 containerd[1693]: 2025-04-30 03:30:10.073 [INFO][4821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:10.079336 containerd[1693]: 2025-04-30 03:30:10.075 [INFO][4792] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Apr 30 03:30:10.079929 containerd[1693]: time="2025-04-30T03:30:10.079431687Z" level=info msg="TearDown network for sandbox \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\" successfully" Apr 30 03:30:10.079929 containerd[1693]: time="2025-04-30T03:30:10.079471188Z" level=info msg="StopPodSandbox for \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\" returns successfully" Apr 30 03:30:10.082879 containerd[1693]: time="2025-04-30T03:30:10.082847305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df5fd9db9-8qshg,Uid:8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:30:10.086814 systemd[1]: run-netns-cni\x2df647596b\x2d9b80\x2d7e3c\x2dd270\x2d5d7e5591f034.mount: Deactivated successfully. Apr 30 03:30:10.095766 containerd[1693]: 2025-04-30 03:30:09.969 [INFO][4794] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Apr 30 03:30:10.095766 containerd[1693]: 2025-04-30 03:30:09.971 [INFO][4794] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" iface="eth0" netns="/var/run/netns/cni-f5dcf900-0a72-1c3a-0a5f-2b4fc0f45552" Apr 30 03:30:10.095766 containerd[1693]: 2025-04-30 03:30:09.971 [INFO][4794] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" iface="eth0" netns="/var/run/netns/cni-f5dcf900-0a72-1c3a-0a5f-2b4fc0f45552" Apr 30 03:30:10.095766 containerd[1693]: 2025-04-30 03:30:09.972 [INFO][4794] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" iface="eth0" netns="/var/run/netns/cni-f5dcf900-0a72-1c3a-0a5f-2b4fc0f45552" Apr 30 03:30:10.095766 containerd[1693]: 2025-04-30 03:30:09.972 [INFO][4794] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Apr 30 03:30:10.095766 containerd[1693]: 2025-04-30 03:30:09.972 [INFO][4794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Apr 30 03:30:10.095766 containerd[1693]: 2025-04-30 03:30:10.066 [INFO][4816] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" HandleID="k8s-pod-network.8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:10.095766 containerd[1693]: 2025-04-30 03:30:10.066 [INFO][4816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:10.095766 containerd[1693]: 2025-04-30 03:30:10.074 [INFO][4816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:10.095766 containerd[1693]: 2025-04-30 03:30:10.089 [WARNING][4816] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" HandleID="k8s-pod-network.8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:10.095766 containerd[1693]: 2025-04-30 03:30:10.089 [INFO][4816] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" HandleID="k8s-pod-network.8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:10.095766 containerd[1693]: 2025-04-30 03:30:10.091 [INFO][4816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:10.095766 containerd[1693]: 2025-04-30 03:30:10.093 [INFO][4794] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Apr 30 03:30:10.098557 containerd[1693]: time="2025-04-30T03:30:10.096418777Z" level=info msg="TearDown network for sandbox \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\" successfully" Apr 30 03:30:10.098557 containerd[1693]: time="2025-04-30T03:30:10.096446577Z" level=info msg="StopPodSandbox for \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\" returns successfully" Apr 30 03:30:10.099760 containerd[1693]: time="2025-04-30T03:30:10.099452093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2l4hw,Uid:7af10b02-117f-4e7d-ab6d-30d146cf4d03,Namespace:kube-system,Attempt:1,}" Apr 30 03:30:10.100610 systemd[1]: run-netns-cni\x2df5dcf900\x2d0a72\x2d1c3a\x2d0a5f\x2d2b4fc0f45552.mount: Deactivated successfully. Apr 30 03:30:10.401208 systemd-networkd[1484]: cali63c211a892a: Link UP Apr 30 03:30:10.404942 systemd-networkd[1484]: cali63c211a892a: Gained carrier Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.218 [INFO][4833] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0 csi-node-driver- calico-system 79a6da92-25f7-40b3-a880-7f6f766b31fd 792 0 2025-04-30 03:29:41 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-a-a5554f61da csi-node-driver-xqthf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali63c211a892a [] []}} ContainerID="06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" Namespace="calico-system" Pod="csi-node-driver-xqthf" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-" Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.219 [INFO][4833] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" Namespace="calico-system" Pod="csi-node-driver-xqthf" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.323 [INFO][4871] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" HandleID="k8s-pod-network.06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" Workload="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.348 [INFO][4871] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" HandleID="k8s-pod-network.06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" Workload="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-a5554f61da", "pod":"csi-node-driver-xqthf", "timestamp":"2025-04-30 03:30:10.323941472 +0000 UTC"}, Hostname:"ci-4081.3.3-a-a5554f61da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.349 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.349 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.349 [INFO][4871] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-a5554f61da' Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.352 [INFO][4871] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.359 [INFO][4871] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.366 [INFO][4871] ipam/ipam.go 489: Trying affinity for 192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.369 [INFO][4871] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.373 [INFO][4871] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.373 [INFO][4871] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.375 [INFO][4871] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6 Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.383 [INFO][4871] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.392 [INFO][4871] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.131/26] block=192.168.107.128/26 handle="k8s-pod-network.06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.392 [INFO][4871] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.131/26] handle="k8s-pod-network.06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.392 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
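The [4871] sequence above is Calico's block-affinity IPAM in order: confirm this node's affinity to 192.168.107.128/26, load the block, assign one address from it (192.168.107.131), write the block back to claim the IP, then release the host-wide lock. Below is a toy sketch of just the assignment step, assuming a simple in-memory block; the real logic in ipam/ipam.go additionally manages handles, reservations and datastore compare-and-swap.

```go
// Toy sketch of the assignment step: claim the first free address in the
// node-affine block. In-memory stand-in only.
package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	cidr netip.Prefix        // the node-affine block, 192.168.107.128/26
	used map[netip.Addr]bool // addresses already claimed
}

func (b *block) assign() (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if !b.used[a] {
			b.used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted; Calico would try another block
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.107.128/26"),
		used: map[netip.Addr]bool{
			netip.MustParseAddr("192.168.107.128"): true, // block base, reserved in practice
			netip.MustParseAddr("192.168.107.129"): true, // assigned before this excerpt (assumption)
			netip.MustParseAddr("192.168.107.130"): true, // calico-kube-controllers, above
		},
	}
	if a, ok := b.assign(); ok {
		fmt.Println("claimed", a) // claimed 192.168.107.131, matching the log
	}
}
```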
Apr 30 03:30:10.425768 containerd[1693]: 2025-04-30 03:30:10.393 [INFO][4871] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.131/26] IPv6=[] ContainerID="06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" HandleID="k8s-pod-network.06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" Workload="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:10.426678 containerd[1693]: 2025-04-30 03:30:10.396 [INFO][4833] cni-plugin/k8s.go 386: Populated endpoint ContainerID="06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" Namespace="calico-system" Pod="csi-node-driver-xqthf" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"79a6da92-25f7-40b3-a880-7f6f766b31fd", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"", Pod:"csi-node-driver-xqthf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63c211a892a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:10.426678 containerd[1693]: 2025-04-30 03:30:10.396 [INFO][4833] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.131/32] ContainerID="06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" Namespace="calico-system" Pod="csi-node-driver-xqthf" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:10.426678 containerd[1693]: 2025-04-30 03:30:10.396 [INFO][4833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63c211a892a ContainerID="06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" Namespace="calico-system" Pod="csi-node-driver-xqthf" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:10.426678 containerd[1693]: 2025-04-30 03:30:10.404 [INFO][4833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" Namespace="calico-system" Pod="csi-node-driver-xqthf" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:10.426678 containerd[1693]: 2025-04-30 03:30:10.405 [INFO][4833] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" Namespace="calico-system" Pod="csi-node-driver-xqthf" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"79a6da92-25f7-40b3-a880-7f6f766b31fd", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6", Pod:"csi-node-driver-xqthf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63c211a892a", MAC:"be:7c:c8:de:26:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:10.426678 containerd[1693]: 2025-04-30 03:30:10.422 [INFO][4833] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6" Namespace="calico-system" Pod="csi-node-driver-xqthf" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:10.469774 containerd[1693]: time="2025-04-30T03:30:10.468643433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:10.469774 containerd[1693]: time="2025-04-30T03:30:10.468907334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:10.469774 containerd[1693]: time="2025-04-30T03:30:10.468980934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:10.470004 containerd[1693]: time="2025-04-30T03:30:10.469857639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:10.496131 systemd-networkd[1484]: cali7a32d6772b6: Gained IPv6LL Apr 30 03:30:10.501542 systemd[1]: Started cri-containerd-06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6.scope - libcontainer container 06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6. 
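Each "Started cri-containerd-<id>.scope" line is systemd placing a sandbox task's runc shim into a transient scope, right after the shim loads its event, shutdown, task and pause plugins. For context, this is roughly the sequence at the plain containerd Go-client level; the kubelet actually drives it over CRI gRPC, and the image reference and IDs below are illustrative, not from this host.

```go
// Rough shape of the task start-up logged above, against containerd's plain
// Go client. Illustrative sketch; not the kubelet's CRI code path.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed workloads live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	container, err := client.NewContainer(ctx, "sandbox-demo",
		containerd.WithNewSnapshot("sandbox-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	// NewTask is what spawns the runc.v2 shim whose plugin loading and
	// systemd scope appear in the log above.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```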
Apr 30 03:30:10.515435 systemd-networkd[1484]: calibb26bb0fa2b: Link UP Apr 30 03:30:10.516748 systemd-networkd[1484]: calibb26bb0fa2b: Gained carrier Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.254 [INFO][4844] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0 coredns-668d6bf9bc- kube-system 7af10b02-117f-4e7d-ab6d-30d146cf4d03 793 0 2025-04-30 03:29:30 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-a5554f61da coredns-668d6bf9bc-2l4hw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibb26bb0fa2b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2l4hw" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-" Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.255 [INFO][4844] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2l4hw" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.335 [INFO][4878] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" HandleID="k8s-pod-network.423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.351 [INFO][4878] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" HandleID="k8s-pod-network.423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000305cd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-a5554f61da", "pod":"coredns-668d6bf9bc-2l4hw", "timestamp":"2025-04-30 03:30:10.335792934 +0000 UTC"}, Hostname:"ci-4081.3.3-a-a5554f61da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.351 [INFO][4878] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.393 [INFO][4878] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.393 [INFO][4878] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-a5554f61da' Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.454 [INFO][4878] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.462 [INFO][4878] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.473 [INFO][4878] ipam/ipam.go 489: Trying affinity for 192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.475 [INFO][4878] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.478 [INFO][4878] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.479 [INFO][4878] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.483 [INFO][4878] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.493 [INFO][4878] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.506 [INFO][4878] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.132/26] block=192.168.107.128/26 handle="k8s-pod-network.423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.506 [INFO][4878] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.132/26] handle="k8s-pod-network.423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.506 [INFO][4878] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
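Three CNI ADDs are in flight here, and their timestamps show them serializing on the host-wide IPAM lock: [4871] releases at 10.392 and [4878], waiting since 10.351, acquires at 10.393; [4880] in turn waits from 10.365 until [4878] releases at 10.506. A compact sketch of that pattern, with a plain sync.Mutex standing in for Calico's datastore-backed lock:

```go
// Concurrent CNI ADDs serializing on one host-wide lock, as [4871]/[4878]/
// [4880] do above. Which goroutine wins each round is up to the scheduler,
// just as the acquisition order in the log is up to timing.
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		ipamLock sync.Mutex
		wg       sync.WaitGroup
		next     = 131 // .130 was handed out earlier in the log
	)
	pods := []string{
		"csi-node-driver-xqthf",
		"coredns-668d6bf9bc-2l4hw",
		"calico-apiserver-5df5fd9db9-8qshg",
	}
	for _, pod := range pods {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			ipamLock.Lock()         // "About to acquire host-wide IPAM lock."
			defer ipamLock.Unlock() // "Released host-wide IPAM lock."
			fmt.Printf("192.168.107.%d/26 -> %s\n", next, pod)
			next++ // safe only because the lock is held
		}(pod)
	}
	wg.Wait()
}
```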
Apr 30 03:30:10.537091 containerd[1693]: 2025-04-30 03:30:10.506 [INFO][4878] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.132/26] IPv6=[] ContainerID="423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" HandleID="k8s-pod-network.423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:10.538908 containerd[1693]: 2025-04-30 03:30:10.510 [INFO][4844] cni-plugin/k8s.go 386: Populated endpoint ContainerID="423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2l4hw" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7af10b02-117f-4e7d-ab6d-30d146cf4d03", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"", Pod:"coredns-668d6bf9bc-2l4hw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb26bb0fa2b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:10.538908 containerd[1693]: 2025-04-30 03:30:10.510 [INFO][4844] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.132/32] ContainerID="423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2l4hw" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:10.538908 containerd[1693]: 2025-04-30 03:30:10.510 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb26bb0fa2b ContainerID="423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2l4hw" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:10.538908 containerd[1693]: 2025-04-30 03:30:10.517 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2l4hw" 
WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:10.538908 containerd[1693]: 2025-04-30 03:30:10.517 [INFO][4844] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2l4hw" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7af10b02-117f-4e7d-ab6d-30d146cf4d03", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d", Pod:"coredns-668d6bf9bc-2l4hw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb26bb0fa2b", MAC:"26:7e:ac:d8:68:a7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:10.538908 containerd[1693]: 2025-04-30 03:30:10.533 [INFO][4844] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2l4hw" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:10.618125 containerd[1693]: time="2025-04-30T03:30:10.617225413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:10.619680 containerd[1693]: time="2025-04-30T03:30:10.619560726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xqthf,Uid:79a6da92-25f7-40b3-a880-7f6f766b31fd,Namespace:calico-system,Attempt:1,} returns sandbox id \"06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6\"" Apr 30 03:30:10.623803 containerd[1693]: time="2025-04-30T03:30:10.618190718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:10.623803 containerd[1693]: time="2025-04-30T03:30:10.619159824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:10.623803 containerd[1693]: time="2025-04-30T03:30:10.620777532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:10.624199 systemd-networkd[1484]: cali2f4ddbab9ef: Link UP Apr 30 03:30:10.628690 systemd-networkd[1484]: cali2f4ddbab9ef: Gained carrier Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.262 [INFO][4854] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0 calico-apiserver-5df5fd9db9- calico-apiserver 8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49 794 0 2025-04-30 03:29:41 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5df5fd9db9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-a5554f61da calico-apiserver-5df5fd9db9-8qshg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2f4ddbab9ef [] []}} ContainerID="8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-8qshg" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-" Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.262 [INFO][4854] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-8qshg" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.352 [INFO][4880] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" HandleID="k8s-pod-network.8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.365 [INFO][4880] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" HandleID="k8s-pod-network.8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef100), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-a5554f61da", "pod":"calico-apiserver-5df5fd9db9-8qshg", "timestamp":"2025-04-30 03:30:10.351848719 +0000 UTC"}, Hostname:"ci-4081.3.3-a-a5554f61da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.365 [INFO][4880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
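"Setting the host side veth name to cali2f4ddbab9ef": Calico names host-side interfaces deterministically, a "cali" prefix plus a truncated hash of the workload identity, which keeps the result within the kernel's 15-character interface-name limit. An illustrative sketch of that shape; the exact hash input is a Calico implementation detail and the format below is an assumption.

```go
// Illustrative Calico-style veth naming: "cali" + the first 11 hex characters
// of a hash of the workload identity (15 chars total, within IFNAMSIZ).
// The hash input format here is an assumption, not Calico's real code.
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

func vethName(namespace, pod string) string {
	sum := sha1.Sum([]byte(namespace + "." + pod))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("calico-apiserver", "calico-apiserver-5df5fd9db9-8qshg"))
}
```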
Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.506 [INFO][4880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.506 [INFO][4880] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-a5554f61da' Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.555 [INFO][4880] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.567 [INFO][4880] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.576 [INFO][4880] ipam/ipam.go 489: Trying affinity for 192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.580 [INFO][4880] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.584 [INFO][4880] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.585 [INFO][4880] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.587 [INFO][4880] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193 Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.594 [INFO][4880] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.607 [INFO][4880] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.133/26] block=192.168.107.128/26 handle="k8s-pod-network.8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.607 [INFO][4880] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.133/26] handle="k8s-pod-network.8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.608 [INFO][4880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
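Every address handed out in this section (.130 through .134) comes from the same per-node block, 192.168.107.128/26, which spans .128 through .191. A quick sanity check on the block size:

```go
// Arithmetic on the per-node IPAM block seen throughout this section:
// a /26 holds 2^(32-26) = 64 addresses, 192.168.107.128 through .191.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.107.128/26")
	size := 1 << (32 - p.Bits()) // 64 addresses in the block
	last := p.Addr()
	for i := 1; i < size; i++ {
		last = last.Next()
	}
	fmt.Printf("%v: %d addresses, %v through %v\n", p, size, p.Addr(), last)
}
```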
Apr 30 03:30:10.657892 containerd[1693]: 2025-04-30 03:30:10.608 [INFO][4880] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.133/26] IPv6=[] ContainerID="8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" HandleID="k8s-pod-network.8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:10.658765 containerd[1693]: 2025-04-30 03:30:10.614 [INFO][4854] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-8qshg" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0", GenerateName:"calico-apiserver-5df5fd9db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df5fd9db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"", Pod:"calico-apiserver-5df5fd9db9-8qshg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f4ddbab9ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:10.658765 containerd[1693]: 2025-04-30 03:30:10.615 [INFO][4854] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.133/32] ContainerID="8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-8qshg" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:10.658765 containerd[1693]: 2025-04-30 03:30:10.615 [INFO][4854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f4ddbab9ef ContainerID="8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-8qshg" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:10.658765 containerd[1693]: 2025-04-30 03:30:10.629 [INFO][4854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-8qshg" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:10.658765 containerd[1693]: 2025-04-30 03:30:10.632 [INFO][4854] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-8qshg" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0", GenerateName:"calico-apiserver-5df5fd9db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df5fd9db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193", Pod:"calico-apiserver-5df5fd9db9-8qshg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f4ddbab9ef", MAC:"42:b3:82:09:40:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:10.658765 containerd[1693]: 2025-04-30 03:30:10.648 [INFO][4854] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193" Namespace="calico-apiserver" Pod="calico-apiserver-5df5fd9db9-8qshg" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:10.673587 systemd[1]: Started cri-containerd-423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d.scope - libcontainer container 423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d. Apr 30 03:30:10.724482 containerd[1693]: time="2025-04-30T03:30:10.722789268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:10.724482 containerd[1693]: time="2025-04-30T03:30:10.723102070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:10.724482 containerd[1693]: time="2025-04-30T03:30:10.723156570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:10.724482 containerd[1693]: time="2025-04-30T03:30:10.723297271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:10.759548 systemd[1]: Started cri-containerd-8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193.scope - libcontainer container 8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193. 
Apr 30 03:30:10.766470 containerd[1693]: time="2025-04-30T03:30:10.765578293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2l4hw,Uid:7af10b02-117f-4e7d-ab6d-30d146cf4d03,Namespace:kube-system,Attempt:1,} returns sandbox id \"423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d\"" Apr 30 03:30:10.774826 containerd[1693]: time="2025-04-30T03:30:10.774771841Z" level=info msg="CreateContainer within sandbox \"423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:30:10.811948 containerd[1693]: time="2025-04-30T03:30:10.811906636Z" level=info msg="CreateContainer within sandbox \"423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d200f708d6274a755af4a355d10f3e17e57f8600df1bf8ffa974a0770ef003eb\"" Apr 30 03:30:10.813970 containerd[1693]: time="2025-04-30T03:30:10.813500345Z" level=info msg="StartContainer for \"d200f708d6274a755af4a355d10f3e17e57f8600df1bf8ffa974a0770ef003eb\"" Apr 30 03:30:10.821733 containerd[1693]: time="2025-04-30T03:30:10.821665388Z" level=info msg="StopPodSandbox for \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\"" Apr 30 03:30:10.853103 containerd[1693]: time="2025-04-30T03:30:10.852994752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df5fd9db9-8qshg,Uid:8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193\"" Apr 30 03:30:10.893547 systemd[1]: Started cri-containerd-d200f708d6274a755af4a355d10f3e17e57f8600df1bf8ffa974a0770ef003eb.scope - libcontainer container d200f708d6274a755af4a355d10f3e17e57f8600df1bf8ffa974a0770ef003eb. Apr 30 03:30:10.963343 containerd[1693]: time="2025-04-30T03:30:10.963230332Z" level=info msg="StartContainer for \"d200f708d6274a755af4a355d10f3e17e57f8600df1bf8ffa974a0770ef003eb\" returns successfully" Apr 30 03:30:11.051116 containerd[1693]: 2025-04-30 03:30:10.940 [INFO][5075] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Apr 30 03:30:11.051116 containerd[1693]: 2025-04-30 03:30:10.940 [INFO][5075] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" iface="eth0" netns="/var/run/netns/cni-48075b22-8147-bad9-483b-8b4aa801a720" Apr 30 03:30:11.051116 containerd[1693]: 2025-04-30 03:30:10.941 [INFO][5075] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" iface="eth0" netns="/var/run/netns/cni-48075b22-8147-bad9-483b-8b4aa801a720" Apr 30 03:30:11.051116 containerd[1693]: 2025-04-30 03:30:10.943 [INFO][5075] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" iface="eth0" netns="/var/run/netns/cni-48075b22-8147-bad9-483b-8b4aa801a720" Apr 30 03:30:11.051116 containerd[1693]: 2025-04-30 03:30:10.943 [INFO][5075] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Apr 30 03:30:11.051116 containerd[1693]: 2025-04-30 03:30:10.943 [INFO][5075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Apr 30 03:30:11.051116 containerd[1693]: 2025-04-30 03:30:11.028 [INFO][5105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" HandleID="k8s-pod-network.4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:11.051116 containerd[1693]: 2025-04-30 03:30:11.028 [INFO][5105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:11.051116 containerd[1693]: 2025-04-30 03:30:11.028 [INFO][5105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:11.051116 containerd[1693]: 2025-04-30 03:30:11.037 [WARNING][5105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" HandleID="k8s-pod-network.4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:11.051116 containerd[1693]: 2025-04-30 03:30:11.037 [INFO][5105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" HandleID="k8s-pod-network.4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:11.051116 containerd[1693]: 2025-04-30 03:30:11.039 [INFO][5105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:11.051116 containerd[1693]: 2025-04-30 03:30:11.044 [INFO][5075] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Apr 30 03:30:11.052791 containerd[1693]: time="2025-04-30T03:30:11.051240194Z" level=info msg="TearDown network for sandbox \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\" successfully" Apr 30 03:30:11.052791 containerd[1693]: time="2025-04-30T03:30:11.051270594Z" level=info msg="StopPodSandbox for \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\" returns successfully" Apr 30 03:30:11.055106 containerd[1693]: time="2025-04-30T03:30:11.054643812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xfl4g,Uid:ac7b6b9e-a78e-4c10-8774-981b5e31a478,Namespace:kube-system,Attempt:1,}" Apr 30 03:30:11.063966 systemd[1]: run-netns-cni\x2d48075b22\x2d8147\x2dbad9\x2d483b\x2d8b4aa801a720.mount: Deactivated successfully. 
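Every teardown in this section logs the same release protocol: under the host-wide lock, release by handle ID; when the handle is already gone (the recurring "Asked to release address but it doesn't exist. Ignoring" warning, which the plugin treats as non-fatal), retry by workload ID. A hedged sketch of that fallback follows; the types and names are illustrative, not Calico's API.

```go
// Sketch of the release sequence the teardown entries above log: try the
// handle ID first, and on a miss (the WARNING) retry with the workload ID.
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errNotFound = errors.New("no such handle")

type ipamStore struct {
	mu      sync.Mutex          // stands in for the host-wide IPAM lock
	handles map[string][]string // handle -> addresses
}

func (s *ipamStore) releaseByHandle(handle string) error {
	if _, ok := s.handles[handle]; !ok {
		return errNotFound
	}
	delete(s.handles, handle)
	return nil
}

// release mirrors "Releasing address using handleID" followed, on a miss,
// by "Releasing address using workloadID".
func (s *ipamStore) release(handleID, workloadID string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if err := s.releaseByHandle(handleID); errors.Is(err, errNotFound) {
		fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
		_ = s.releaseByHandle(workloadID) // best effort
	}
}

func main() {
	s := &ipamStore{handles: map[string][]string{}}
	s.release("k8s-pod-network.4d5d88323a9d", "coredns-668d6bf9bc-xfl4g") // IDs shortened for illustration
}
```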
Apr 30 03:30:11.134787 kubelet[3184]: I0430 03:30:11.132226 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2l4hw" podStartSLOduration=41.132203619 podStartE2EDuration="41.132203619s" podCreationTimestamp="2025-04-30 03:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:30:11.094406221 +0000 UTC m=+46.357864675" watchObservedRunningTime="2025-04-30 03:30:11.132203619 +0000 UTC m=+46.395661973" Apr 30 03:30:11.310664 systemd-networkd[1484]: calia500ff76a88: Link UP Apr 30 03:30:11.311615 systemd-networkd[1484]: calia500ff76a88: Gained carrier Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.204 [INFO][5121] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0 coredns-668d6bf9bc- kube-system ac7b6b9e-a78e-4c10-8774-981b5e31a478 811 0 2025-04-30 03:29:30 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-a5554f61da coredns-668d6bf9bc-xfl4g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia500ff76a88 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfl4g" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-" Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.205 [INFO][5121] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfl4g" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.251 [INFO][5136] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" HandleID="k8s-pod-network.96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.263 [INFO][5136] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" HandleID="k8s-pod-network.96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d6ae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-a5554f61da", "pod":"coredns-668d6bf9bc-xfl4g", "timestamp":"2025-04-30 03:30:11.251800848 +0000 UTC"}, Hostname:"ci-4081.3.3-a-a5554f61da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.263 [INFO][5136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.263 [INFO][5136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
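The kubelet entry above is its pod-startup SLO tracker. With no image pull involved (both pull timestamps are the zero time), podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp, and the arithmetic reproduces the logged value:

```go
// Reproducing the kubelet's podStartSLOduration from the entry above:
// watchObservedRunningTime - podCreationTimestamp, with no pull time to
// exclude since both pull timestamps are the zero time.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // Go's time.String() format, as logged
	created, _ := time.Parse(layout, "2025-04-30 03:29:30 +0000 UTC")
	running, _ := time.Parse(layout, "2025-04-30 03:30:11.132203619 +0000 UTC")
	fmt.Println(running.Sub(created).Seconds()) // 41.132203619, matching podStartSLOduration
}
```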
Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.263 [INFO][5136] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-a5554f61da' Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.265 [INFO][5136] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.271 [INFO][5136] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.276 [INFO][5136] ipam/ipam.go 489: Trying affinity for 192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.278 [INFO][5136] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.281 [INFO][5136] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.282 [INFO][5136] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.284 [INFO][5136] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3 Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.294 [INFO][5136] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.304 [INFO][5136] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.134/26] block=192.168.107.128/26 handle="k8s-pod-network.96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.304 [INFO][5136] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.134/26] handle="k8s-pod-network.96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" host="ci-4081.3.3-a-a5554f61da" Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.304 [INFO][5136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:30:11.341845 containerd[1693]: 2025-04-30 03:30:11.304 [INFO][5136] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.134/26] IPv6=[] ContainerID="96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" HandleID="k8s-pod-network.96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:11.343933 containerd[1693]: 2025-04-30 03:30:11.307 [INFO][5121] cni-plugin/k8s.go 386: Populated endpoint ContainerID="96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfl4g" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ac7b6b9e-a78e-4c10-8774-981b5e31a478", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"", Pod:"coredns-668d6bf9bc-xfl4g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia500ff76a88", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:11.343933 containerd[1693]: 2025-04-30 03:30:11.307 [INFO][5121] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.134/32] ContainerID="96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfl4g" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:11.343933 containerd[1693]: 2025-04-30 03:30:11.307 [INFO][5121] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia500ff76a88 ContainerID="96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfl4g" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:11.343933 containerd[1693]: 2025-04-30 03:30:11.311 [INFO][5121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfl4g" 
WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:11.343933 containerd[1693]: 2025-04-30 03:30:11.311 [INFO][5121] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfl4g" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ac7b6b9e-a78e-4c10-8774-981b5e31a478", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3", Pod:"coredns-668d6bf9bc-xfl4g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia500ff76a88", MAC:"5e:d1:0a:64:65:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:11.343933 containerd[1693]: 2025-04-30 03:30:11.339 [INFO][5121] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfl4g" WorkloadEndpoint="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:11.367742 containerd[1693]: time="2025-04-30T03:30:11.367693957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:11.369658 containerd[1693]: time="2025-04-30T03:30:11.369617267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:30:11.372570 containerd[1693]: time="2025-04-30T03:30:11.372515682Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:11.377583 containerd[1693]: time="2025-04-30T03:30:11.377462608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:11.377710 containerd[1693]: time="2025-04-30T03:30:11.377609209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:11.377872 containerd[1693]: time="2025-04-30T03:30:11.377649609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:11.378054 containerd[1693]: time="2025-04-30T03:30:11.377757410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:11.378606 containerd[1693]: time="2025-04-30T03:30:11.378280413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:11.380317 containerd[1693]: time="2025-04-30T03:30:11.380240623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 4.229508726s" Apr 30 03:30:11.380317 containerd[1693]: time="2025-04-30T03:30:11.380280323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:30:11.385817 containerd[1693]: time="2025-04-30T03:30:11.385778652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:30:11.391748 containerd[1693]: time="2025-04-30T03:30:11.391551682Z" level=info msg="CreateContainer within sandbox \"da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:30:11.406541 systemd[1]: Started cri-containerd-96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3.scope - libcontainer container 96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3. Apr 30 03:30:11.429529 containerd[1693]: time="2025-04-30T03:30:11.429310081Z" level=info msg="CreateContainer within sandbox \"da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0d47d9cf0ce798529b8888fb30b08df65da32b381dbc1c021c774d9907ccf6e8\"" Apr 30 03:30:11.430289 containerd[1693]: time="2025-04-30T03:30:11.430250186Z" level=info msg="StartContainer for \"0d47d9cf0ce798529b8888fb30b08df65da32b381dbc1c021c774d9907ccf6e8\"" Apr 30 03:30:11.468554 systemd[1]: Started cri-containerd-0d47d9cf0ce798529b8888fb30b08df65da32b381dbc1c021c774d9907ccf6e8.scope - libcontainer container 0d47d9cf0ce798529b8888fb30b08df65da32b381dbc1c021c774d9907ccf6e8. 
Apr 30 03:30:11.476612 containerd[1693]: time="2025-04-30T03:30:11.476565329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xfl4g,Uid:ac7b6b9e-a78e-4c10-8774-981b5e31a478,Namespace:kube-system,Attempt:1,} returns sandbox id \"96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3\"" Apr 30 03:30:11.481275 containerd[1693]: time="2025-04-30T03:30:11.481173253Z" level=info msg="CreateContainer within sandbox \"96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:30:11.517449 containerd[1693]: time="2025-04-30T03:30:11.517409944Z" level=info msg="CreateContainer within sandbox \"96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe718482cd0bba7c4d92672c78356aa2eb8e26b04ac8739caf22938963af6cc6\"" Apr 30 03:30:11.520390 containerd[1693]: time="2025-04-30T03:30:11.519254353Z" level=info msg="StartContainer for \"fe718482cd0bba7c4d92672c78356aa2eb8e26b04ac8739caf22938963af6cc6\"" Apr 30 03:30:11.526118 containerd[1693]: time="2025-04-30T03:30:11.526087389Z" level=info msg="StartContainer for \"0d47d9cf0ce798529b8888fb30b08df65da32b381dbc1c021c774d9907ccf6e8\" returns successfully" Apr 30 03:30:11.555188 systemd[1]: Started cri-containerd-fe718482cd0bba7c4d92672c78356aa2eb8e26b04ac8739caf22938963af6cc6.scope - libcontainer container fe718482cd0bba7c4d92672c78356aa2eb8e26b04ac8739caf22938963af6cc6. Apr 30 03:30:11.584299 systemd-networkd[1484]: cali63c211a892a: Gained IPv6LL Apr 30 03:30:11.633178 containerd[1693]: time="2025-04-30T03:30:11.632489648Z" level=info msg="StartContainer for \"fe718482cd0bba7c4d92672c78356aa2eb8e26b04ac8739caf22938963af6cc6\" returns successfully" Apr 30 03:30:11.775512 systemd-networkd[1484]: cali2f4ddbab9ef: Gained IPv6LL Apr 30 03:30:12.105981 kubelet[3184]: I0430 03:30:12.105422 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5df5fd9db9-tlbv5" podStartSLOduration=26.874706767 podStartE2EDuration="31.105401798s" podCreationTimestamp="2025-04-30 03:29:41 +0000 UTC" firstStartedPulling="2025-04-30 03:30:07.150398496 +0000 UTC m=+42.413856850" lastFinishedPulling="2025-04-30 03:30:11.381093527 +0000 UTC m=+46.644551881" observedRunningTime="2025-04-30 03:30:12.104271234 +0000 UTC m=+47.367729688" watchObservedRunningTime="2025-04-30 03:30:12.105401798 +0000 UTC m=+47.368860152" Apr 30 03:30:12.543514 systemd-networkd[1484]: calibb26bb0fa2b: Gained IPv6LL Apr 30 03:30:12.735839 systemd-networkd[1484]: calia500ff76a88: Gained IPv6LL Apr 30 03:30:12.765555 kubelet[3184]: I0430 03:30:12.765306 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xfl4g" podStartSLOduration=42.765283705 podStartE2EDuration="42.765283705s" podCreationTimestamp="2025-04-30 03:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:30:12.119337015 +0000 UTC m=+47.382795369" watchObservedRunningTime="2025-04-30 03:30:12.765283705 +0000 UTC m=+48.028742059" Apr 30 03:30:14.100650 containerd[1693]: time="2025-04-30T03:30:14.100597202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:14.103777 containerd[1693]: time="2025-04-30T03:30:14.103713863Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:30:14.107667 containerd[1693]: time="2025-04-30T03:30:14.107601939Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:14.114108 containerd[1693]: time="2025-04-30T03:30:14.114040465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:14.115723 containerd[1693]: time="2025-04-30T03:30:14.115317990Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.729489838s" Apr 30 03:30:14.115723 containerd[1693]: time="2025-04-30T03:30:14.115381292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:30:14.118391 containerd[1693]: time="2025-04-30T03:30:14.117615835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:30:14.144419 containerd[1693]: time="2025-04-30T03:30:14.143136735Z" level=info msg="CreateContainer within sandbox \"cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:30:14.190208 containerd[1693]: time="2025-04-30T03:30:14.190150556Z" level=info msg="CreateContainer within sandbox \"cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0eeed4607c17ef7777b406a8b18c4d18191e7c0ee1620c87d140b795240e7458\"" Apr 30 03:30:14.191193 containerd[1693]: time="2025-04-30T03:30:14.191165276Z" level=info msg="StartContainer for \"0eeed4607c17ef7777b406a8b18c4d18191e7c0ee1620c87d140b795240e7458\"" Apr 30 03:30:14.232832 systemd[1]: Started cri-containerd-0eeed4607c17ef7777b406a8b18c4d18191e7c0ee1620c87d140b795240e7458.scope - libcontainer container 0eeed4607c17ef7777b406a8b18c4d18191e7c0ee1620c87d140b795240e7458. Apr 30 03:30:14.274059 containerd[1693]: time="2025-04-30T03:30:14.274001299Z" level=info msg="StartContainer for \"0eeed4607c17ef7777b406a8b18c4d18191e7c0ee1620c87d140b795240e7458\" returns successfully" Apr 30 03:30:15.156578 systemd[1]: run-containerd-runc-k8s.io-0eeed4607c17ef7777b406a8b18c4d18191e7c0ee1620c87d140b795240e7458-runc.Ens3Kv.mount: Deactivated successfully. 
Apr 30 03:30:15.214172 kubelet[3184]: I0430 03:30:15.213796 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-89d6c9f55-qzrp4" podStartSLOduration=29.300156077 podStartE2EDuration="34.213680308s" podCreationTimestamp="2025-04-30 03:29:41 +0000 UTC" firstStartedPulling="2025-04-30 03:30:09.202966882 +0000 UTC m=+44.466425236" lastFinishedPulling="2025-04-30 03:30:14.116491013 +0000 UTC m=+49.379949467" observedRunningTime="2025-04-30 03:30:15.142630517 +0000 UTC m=+50.406088971" watchObservedRunningTime="2025-04-30 03:30:15.213680308 +0000 UTC m=+50.477138662" Apr 30 03:30:15.420931 containerd[1693]: time="2025-04-30T03:30:15.420811066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:15.422692 containerd[1693]: time="2025-04-30T03:30:15.422639002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:30:15.426499 containerd[1693]: time="2025-04-30T03:30:15.426439377Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:15.430565 containerd[1693]: time="2025-04-30T03:30:15.430491056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:15.431262 containerd[1693]: time="2025-04-30T03:30:15.431116768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.313460332s" Apr 30 03:30:15.431262 containerd[1693]: time="2025-04-30T03:30:15.431153869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:30:15.433256 containerd[1693]: time="2025-04-30T03:30:15.432678399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:30:15.434260 containerd[1693]: time="2025-04-30T03:30:15.434208329Z" level=info msg="CreateContainer within sandbox \"06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:30:15.478285 containerd[1693]: time="2025-04-30T03:30:15.478246592Z" level=info msg="CreateContainer within sandbox \"06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1d3ecc5da8ef1616172c7ebc26170221f9a6dc56f6b1e0eec88538cb596a525d\"" Apr 30 03:30:15.479406 containerd[1693]: time="2025-04-30T03:30:15.478789902Z" level=info msg="StartContainer for \"1d3ecc5da8ef1616172c7ebc26170221f9a6dc56f6b1e0eec88538cb596a525d\"" Apr 30 03:30:15.507943 systemd[1]: Started cri-containerd-1d3ecc5da8ef1616172c7ebc26170221f9a6dc56f6b1e0eec88538cb596a525d.scope - libcontainer container 1d3ecc5da8ef1616172c7ebc26170221f9a6dc56f6b1e0eec88538cb596a525d. 
Apr 30 03:30:15.535038 containerd[1693]: time="2025-04-30T03:30:15.534988503Z" level=info msg="StartContainer for \"1d3ecc5da8ef1616172c7ebc26170221f9a6dc56f6b1e0eec88538cb596a525d\" returns successfully" Apr 30 03:30:15.789491 containerd[1693]: time="2025-04-30T03:30:15.789358687Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:15.791508 containerd[1693]: time="2025-04-30T03:30:15.791416927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:30:15.793381 containerd[1693]: time="2025-04-30T03:30:15.793333764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 360.617165ms" Apr 30 03:30:15.793511 containerd[1693]: time="2025-04-30T03:30:15.793384765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:30:15.794918 containerd[1693]: time="2025-04-30T03:30:15.794571789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:30:15.796095 containerd[1693]: time="2025-04-30T03:30:15.795953216Z" level=info msg="CreateContainer within sandbox \"8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:30:15.846312 containerd[1693]: time="2025-04-30T03:30:15.846276202Z" level=info msg="CreateContainer within sandbox \"8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d33cb1ec5ac4b6e853cd6f2eea3a38fc19cfd7084ba2294ade97246dc429dfe3\"" Apr 30 03:30:15.846762 containerd[1693]: time="2025-04-30T03:30:15.846700310Z" level=info msg="StartContainer for \"d33cb1ec5ac4b6e853cd6f2eea3a38fc19cfd7084ba2294ade97246dc429dfe3\"" Apr 30 03:30:15.876811 systemd[1]: Started cri-containerd-d33cb1ec5ac4b6e853cd6f2eea3a38fc19cfd7084ba2294ade97246dc429dfe3.scope - libcontainer container d33cb1ec5ac4b6e853cd6f2eea3a38fc19cfd7084ba2294ade97246dc429dfe3. 
Apr 30 03:30:15.922422 containerd[1693]: time="2025-04-30T03:30:15.922347492Z" level=info msg="StartContainer for \"d33cb1ec5ac4b6e853cd6f2eea3a38fc19cfd7084ba2294ade97246dc429dfe3\" returns successfully" Apr 30 03:30:16.152872 kubelet[3184]: I0430 03:30:16.152744 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5df5fd9db9-8qshg" podStartSLOduration=30.214580793 podStartE2EDuration="35.152725305s" podCreationTimestamp="2025-04-30 03:29:41 +0000 UTC" firstStartedPulling="2025-04-30 03:30:10.855959968 +0000 UTC m=+46.119418422" lastFinishedPulling="2025-04-30 03:30:15.79410458 +0000 UTC m=+51.057562934" observedRunningTime="2025-04-30 03:30:16.151645584 +0000 UTC m=+51.415103938" watchObservedRunningTime="2025-04-30 03:30:16.152725305 +0000 UTC m=+51.416183659" Apr 30 03:30:17.326080 containerd[1693]: time="2025-04-30T03:30:17.326006491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:17.328063 containerd[1693]: time="2025-04-30T03:30:17.328013330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:30:17.331165 containerd[1693]: time="2025-04-30T03:30:17.330991189Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:17.336101 containerd[1693]: time="2025-04-30T03:30:17.336050888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:17.336782 containerd[1693]: time="2025-04-30T03:30:17.336749502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.542140712s" Apr 30 03:30:17.336988 containerd[1693]: time="2025-04-30T03:30:17.336886404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:30:17.339230 containerd[1693]: time="2025-04-30T03:30:17.339196850Z" level=info msg="CreateContainer within sandbox \"06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:30:17.372197 containerd[1693]: time="2025-04-30T03:30:17.372107194Z" level=info msg="CreateContainer within sandbox \"06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d862cc1fc254bde7711f5090ddb900c521ddac984c36688ff43eb2c79fbd00d1\"" Apr 30 03:30:17.373465 containerd[1693]: time="2025-04-30T03:30:17.373419620Z" level=info msg="StartContainer for \"d862cc1fc254bde7711f5090ddb900c521ddac984c36688ff43eb2c79fbd00d1\"" Apr 30 03:30:17.411497 systemd[1]: Started cri-containerd-d862cc1fc254bde7711f5090ddb900c521ddac984c36688ff43eb2c79fbd00d1.scope - libcontainer container 
d862cc1fc254bde7711f5090ddb900c521ddac984c36688ff43eb2c79fbd00d1. Apr 30 03:30:17.441900 containerd[1693]: time="2025-04-30T03:30:17.441768859Z" level=info msg="StartContainer for \"d862cc1fc254bde7711f5090ddb900c521ddac984c36688ff43eb2c79fbd00d1\" returns successfully" Apr 30 03:30:17.906378 kubelet[3184]: I0430 03:30:17.906341 3184 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:30:17.906978 kubelet[3184]: I0430 03:30:17.906393 3184 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:30:24.826827 containerd[1693]: time="2025-04-30T03:30:24.826782393Z" level=info msg="StopPodSandbox for \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\"" Apr 30 03:30:24.896696 containerd[1693]: 2025-04-30 03:30:24.860 [WARNING][5499] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0", GenerateName:"calico-apiserver-5df5fd9db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c40e58a4-a506-47e3-a7c8-b9609b315d66", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df5fd9db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2", Pod:"calico-apiserver-5df5fd9db9-tlbv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib7581b2ba80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:24.896696 containerd[1693]: 2025-04-30 03:30:24.860 [INFO][5499] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Apr 30 03:30:24.896696 containerd[1693]: 2025-04-30 03:30:24.860 [INFO][5499] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" iface="eth0" netns="" Apr 30 03:30:24.896696 containerd[1693]: 2025-04-30 03:30:24.860 [INFO][5499] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Apr 30 03:30:24.896696 containerd[1693]: 2025-04-30 03:30:24.860 [INFO][5499] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Apr 30 03:30:24.896696 containerd[1693]: 2025-04-30 03:30:24.883 [INFO][5509] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" HandleID="k8s-pod-network.48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:24.896696 containerd[1693]: 2025-04-30 03:30:24.883 [INFO][5509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:24.896696 containerd[1693]: 2025-04-30 03:30:24.883 [INFO][5509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:24.896696 containerd[1693]: 2025-04-30 03:30:24.891 [WARNING][5509] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" HandleID="k8s-pod-network.48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:24.896696 containerd[1693]: 2025-04-30 03:30:24.891 [INFO][5509] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" HandleID="k8s-pod-network.48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:24.896696 containerd[1693]: 2025-04-30 03:30:24.893 [INFO][5509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:24.896696 containerd[1693]: 2025-04-30 03:30:24.894 [INFO][5499] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Apr 30 03:30:24.897499 containerd[1693]: time="2025-04-30T03:30:24.896730666Z" level=info msg="TearDown network for sandbox \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\" successfully" Apr 30 03:30:24.897499 containerd[1693]: time="2025-04-30T03:30:24.896755766Z" level=info msg="StopPodSandbox for \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\" returns successfully" Apr 30 03:30:24.897499 containerd[1693]: time="2025-04-30T03:30:24.897416984Z" level=info msg="RemovePodSandbox for \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\"" Apr 30 03:30:24.897757 containerd[1693]: time="2025-04-30T03:30:24.897733893Z" level=info msg="Forcibly stopping sandbox \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\"" Apr 30 03:30:24.959607 containerd[1693]: 2025-04-30 03:30:24.931 [WARNING][5527] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0", GenerateName:"calico-apiserver-5df5fd9db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c40e58a4-a506-47e3-a7c8-b9609b315d66", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df5fd9db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"da24cef58d2e220cdc7df4180b55b322a74c32e1a3592aa5d022fda3ccca90f2", Pod:"calico-apiserver-5df5fd9db9-tlbv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib7581b2ba80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:24.959607 containerd[1693]: 2025-04-30 03:30:24.931 [INFO][5527] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Apr 30 03:30:24.959607 containerd[1693]: 2025-04-30 03:30:24.932 [INFO][5527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" iface="eth0" netns="" Apr 30 03:30:24.959607 containerd[1693]: 2025-04-30 03:30:24.932 [INFO][5527] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Apr 30 03:30:24.959607 containerd[1693]: 2025-04-30 03:30:24.932 [INFO][5527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Apr 30 03:30:24.959607 containerd[1693]: 2025-04-30 03:30:24.949 [INFO][5534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" HandleID="k8s-pod-network.48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:24.959607 containerd[1693]: 2025-04-30 03:30:24.949 [INFO][5534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:24.959607 containerd[1693]: 2025-04-30 03:30:24.949 [INFO][5534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:24.959607 containerd[1693]: 2025-04-30 03:30:24.956 [WARNING][5534] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" HandleID="k8s-pod-network.48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:24.959607 containerd[1693]: 2025-04-30 03:30:24.956 [INFO][5534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" HandleID="k8s-pod-network.48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--tlbv5-eth0" Apr 30 03:30:24.959607 containerd[1693]: 2025-04-30 03:30:24.957 [INFO][5534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:24.959607 containerd[1693]: 2025-04-30 03:30:24.958 [INFO][5527] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c" Apr 30 03:30:24.959607 containerd[1693]: time="2025-04-30T03:30:24.959413244Z" level=info msg="TearDown network for sandbox \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\" successfully" Apr 30 03:30:24.968490 containerd[1693]: time="2025-04-30T03:30:24.968447886Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:24.968617 containerd[1693]: time="2025-04-30T03:30:24.968530888Z" level=info msg="RemovePodSandbox \"48a0d5ece247322ba11dbcc79b84e9474545df975c4f37672d2758476c3bc35c\" returns successfully" Apr 30 03:30:24.969160 containerd[1693]: time="2025-04-30T03:30:24.969129604Z" level=info msg="StopPodSandbox for \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\"" Apr 30 03:30:25.042247 containerd[1693]: 2025-04-30 03:30:25.000 [WARNING][5552] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ac7b6b9e-a78e-4c10-8774-981b5e31a478", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3", Pod:"coredns-668d6bf9bc-xfl4g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia500ff76a88", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:25.042247 containerd[1693]: 2025-04-30 03:30:25.000 [INFO][5552] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Apr 30 03:30:25.042247 containerd[1693]: 2025-04-30 03:30:25.000 [INFO][5552] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" iface="eth0" netns="" Apr 30 03:30:25.042247 containerd[1693]: 2025-04-30 03:30:25.000 [INFO][5552] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Apr 30 03:30:25.042247 containerd[1693]: 2025-04-30 03:30:25.000 [INFO][5552] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Apr 30 03:30:25.042247 containerd[1693]: 2025-04-30 03:30:25.033 [INFO][5559] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" HandleID="k8s-pod-network.4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:25.042247 containerd[1693]: 2025-04-30 03:30:25.033 [INFO][5559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:25.042247 containerd[1693]: 2025-04-30 03:30:25.033 [INFO][5559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:25.042247 containerd[1693]: 2025-04-30 03:30:25.039 [WARNING][5559] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" HandleID="k8s-pod-network.4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:25.042247 containerd[1693]: 2025-04-30 03:30:25.039 [INFO][5559] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" HandleID="k8s-pod-network.4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:25.042247 containerd[1693]: 2025-04-30 03:30:25.040 [INFO][5559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:25.042247 containerd[1693]: 2025-04-30 03:30:25.041 [INFO][5552] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Apr 30 03:30:25.043032 containerd[1693]: time="2025-04-30T03:30:25.042289763Z" level=info msg="TearDown network for sandbox \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\" successfully" Apr 30 03:30:25.043032 containerd[1693]: time="2025-04-30T03:30:25.042316364Z" level=info msg="StopPodSandbox for \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\" returns successfully" Apr 30 03:30:25.043032 containerd[1693]: time="2025-04-30T03:30:25.042756775Z" level=info msg="RemovePodSandbox for \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\"" Apr 30 03:30:25.043032 containerd[1693]: time="2025-04-30T03:30:25.042791876Z" level=info msg="Forcibly stopping sandbox \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\"" Apr 30 03:30:25.102718 containerd[1693]: 2025-04-30 03:30:25.074 [WARNING][5577] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ac7b6b9e-a78e-4c10-8774-981b5e31a478", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"96cf37dba09cdc7d68dfafecf43fb2429a7b34f5d970ab5e7f952bf5a42360d3", Pod:"coredns-668d6bf9bc-xfl4g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia500ff76a88", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:25.102718 containerd[1693]: 2025-04-30 03:30:25.075 [INFO][5577] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Apr 30 03:30:25.102718 containerd[1693]: 2025-04-30 03:30:25.075 [INFO][5577] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" iface="eth0" netns="" Apr 30 03:30:25.102718 containerd[1693]: 2025-04-30 03:30:25.075 [INFO][5577] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Apr 30 03:30:25.102718 containerd[1693]: 2025-04-30 03:30:25.075 [INFO][5577] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Apr 30 03:30:25.102718 containerd[1693]: 2025-04-30 03:30:25.092 [INFO][5584] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" HandleID="k8s-pod-network.4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:25.102718 containerd[1693]: 2025-04-30 03:30:25.092 [INFO][5584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:25.102718 containerd[1693]: 2025-04-30 03:30:25.092 [INFO][5584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:25.102718 containerd[1693]: 2025-04-30 03:30:25.099 [WARNING][5584] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" HandleID="k8s-pod-network.4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:25.102718 containerd[1693]: 2025-04-30 03:30:25.099 [INFO][5584] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" HandleID="k8s-pod-network.4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--xfl4g-eth0" Apr 30 03:30:25.102718 containerd[1693]: 2025-04-30 03:30:25.100 [INFO][5584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:25.102718 containerd[1693]: 2025-04-30 03:30:25.101 [INFO][5577] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f" Apr 30 03:30:25.102718 containerd[1693]: time="2025-04-30T03:30:25.102660879Z" level=info msg="TearDown network for sandbox \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\" successfully" Apr 30 03:30:25.110248 containerd[1693]: time="2025-04-30T03:30:25.110127379Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:25.110703 containerd[1693]: time="2025-04-30T03:30:25.110312484Z" level=info msg="RemovePodSandbox \"4d5d88323a9ddd4e42979d4ddb521faf123243ccebd673b55337d1f2cba09f9f\" returns successfully" Apr 30 03:30:25.111235 containerd[1693]: time="2025-04-30T03:30:25.111208408Z" level=info msg="StopPodSandbox for \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\"" Apr 30 03:30:25.172583 containerd[1693]: 2025-04-30 03:30:25.143 [WARNING][5602] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0", GenerateName:"calico-kube-controllers-89d6c9f55-", Namespace:"calico-system", SelfLink:"", UID:"d053a264-e44d-4450-bd67-987ac2ab6edc", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"89d6c9f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76", Pod:"calico-kube-controllers-89d6c9f55-qzrp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7a32d6772b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:25.172583 containerd[1693]: 2025-04-30 03:30:25.143 [INFO][5602] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Apr 30 03:30:25.172583 containerd[1693]: 2025-04-30 03:30:25.143 [INFO][5602] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" iface="eth0" netns="" Apr 30 03:30:25.172583 containerd[1693]: 2025-04-30 03:30:25.143 [INFO][5602] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Apr 30 03:30:25.172583 containerd[1693]: 2025-04-30 03:30:25.143 [INFO][5602] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Apr 30 03:30:25.172583 containerd[1693]: 2025-04-30 03:30:25.163 [INFO][5609] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" HandleID="k8s-pod-network.60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:25.172583 containerd[1693]: 2025-04-30 03:30:25.164 [INFO][5609] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:25.172583 containerd[1693]: 2025-04-30 03:30:25.164 [INFO][5609] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:25.172583 containerd[1693]: 2025-04-30 03:30:25.169 [WARNING][5609] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" HandleID="k8s-pod-network.60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:25.172583 containerd[1693]: 2025-04-30 03:30:25.169 [INFO][5609] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" HandleID="k8s-pod-network.60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:25.172583 containerd[1693]: 2025-04-30 03:30:25.170 [INFO][5609] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:25.172583 containerd[1693]: 2025-04-30 03:30:25.171 [INFO][5602] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Apr 30 03:30:25.173499 containerd[1693]: time="2025-04-30T03:30:25.172606952Z" level=info msg="TearDown network for sandbox \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\" successfully" Apr 30 03:30:25.173499 containerd[1693]: time="2025-04-30T03:30:25.172634253Z" level=info msg="StopPodSandbox for \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\" returns successfully" Apr 30 03:30:25.173499 containerd[1693]: time="2025-04-30T03:30:25.173042664Z" level=info msg="RemovePodSandbox for \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\"" Apr 30 03:30:25.173499 containerd[1693]: time="2025-04-30T03:30:25.173075465Z" level=info msg="Forcibly stopping sandbox \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\"" Apr 30 03:30:25.231959 containerd[1693]: 2025-04-30 03:30:25.203 [WARNING][5627] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0", GenerateName:"calico-kube-controllers-89d6c9f55-", Namespace:"calico-system", SelfLink:"", UID:"d053a264-e44d-4450-bd67-987ac2ab6edc", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"89d6c9f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"cfab528de7a9710c7430bb6f76efe20277bad711cbb6b14736f758312510ad76", Pod:"calico-kube-controllers-89d6c9f55-qzrp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7a32d6772b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:25.231959 containerd[1693]: 2025-04-30 03:30:25.203 [INFO][5627] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Apr 30 03:30:25.231959 containerd[1693]: 2025-04-30 03:30:25.203 [INFO][5627] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" iface="eth0" netns="" Apr 30 03:30:25.231959 containerd[1693]: 2025-04-30 03:30:25.203 [INFO][5627] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Apr 30 03:30:25.231959 containerd[1693]: 2025-04-30 03:30:25.203 [INFO][5627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Apr 30 03:30:25.231959 containerd[1693]: 2025-04-30 03:30:25.223 [INFO][5634] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" HandleID="k8s-pod-network.60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:25.231959 containerd[1693]: 2025-04-30 03:30:25.223 [INFO][5634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:25.231959 containerd[1693]: 2025-04-30 03:30:25.223 [INFO][5634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:25.231959 containerd[1693]: 2025-04-30 03:30:25.228 [WARNING][5634] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" HandleID="k8s-pod-network.60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:25.231959 containerd[1693]: 2025-04-30 03:30:25.228 [INFO][5634] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" HandleID="k8s-pod-network.60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--kube--controllers--89d6c9f55--qzrp4-eth0" Apr 30 03:30:25.231959 containerd[1693]: 2025-04-30 03:30:25.230 [INFO][5634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:25.231959 containerd[1693]: 2025-04-30 03:30:25.230 [INFO][5627] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a" Apr 30 03:30:25.232670 containerd[1693]: time="2025-04-30T03:30:25.231979542Z" level=info msg="TearDown network for sandbox \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\" successfully" Apr 30 03:30:25.240589 containerd[1693]: time="2025-04-30T03:30:25.240473369Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:25.240810 containerd[1693]: time="2025-04-30T03:30:25.240631773Z" level=info msg="RemovePodSandbox \"60a0f3550034fb90d345e305247f7e13dffde02b6107c74a8f138bfb86f7317a\" returns successfully" Apr 30 03:30:25.241262 containerd[1693]: time="2025-04-30T03:30:25.241165888Z" level=info msg="StopPodSandbox for \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\"" Apr 30 03:30:25.300947 containerd[1693]: 2025-04-30 03:30:25.273 [WARNING][5652] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7af10b02-117f-4e7d-ab6d-30d146cf4d03", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d", Pod:"coredns-668d6bf9bc-2l4hw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb26bb0fa2b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:25.300947 containerd[1693]: 2025-04-30 03:30:25.273 [INFO][5652] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Apr 30 03:30:25.300947 containerd[1693]: 2025-04-30 03:30:25.273 [INFO][5652] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" iface="eth0" netns="" Apr 30 03:30:25.300947 containerd[1693]: 2025-04-30 03:30:25.273 [INFO][5652] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Apr 30 03:30:25.300947 containerd[1693]: 2025-04-30 03:30:25.273 [INFO][5652] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Apr 30 03:30:25.300947 containerd[1693]: 2025-04-30 03:30:25.291 [INFO][5660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" HandleID="k8s-pod-network.8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:25.300947 containerd[1693]: 2025-04-30 03:30:25.291 [INFO][5660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:25.300947 containerd[1693]: 2025-04-30 03:30:25.291 [INFO][5660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:25.300947 containerd[1693]: 2025-04-30 03:30:25.297 [WARNING][5660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" HandleID="k8s-pod-network.8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:25.300947 containerd[1693]: 2025-04-30 03:30:25.297 [INFO][5660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" HandleID="k8s-pod-network.8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:25.300947 containerd[1693]: 2025-04-30 03:30:25.299 [INFO][5660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:25.300947 containerd[1693]: 2025-04-30 03:30:25.299 [INFO][5652] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Apr 30 03:30:25.301889 containerd[1693]: time="2025-04-30T03:30:25.300955588Z" level=info msg="TearDown network for sandbox \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\" successfully" Apr 30 03:30:25.301889 containerd[1693]: time="2025-04-30T03:30:25.301022990Z" level=info msg="StopPodSandbox for \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\" returns successfully" Apr 30 03:30:25.301889 containerd[1693]: time="2025-04-30T03:30:25.301743309Z" level=info msg="RemovePodSandbox for \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\"" Apr 30 03:30:25.301889 containerd[1693]: time="2025-04-30T03:30:25.301775810Z" level=info msg="Forcibly stopping sandbox \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\"" Apr 30 03:30:25.365107 containerd[1693]: 2025-04-30 03:30:25.336 [WARNING][5678] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7af10b02-117f-4e7d-ab6d-30d146cf4d03", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"423efd5ae105231bf0017f61cb43ac1981cfdc6002a65dce626c9456ecb5477d", Pod:"coredns-668d6bf9bc-2l4hw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb26bb0fa2b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:25.365107 containerd[1693]: 2025-04-30 03:30:25.336 [INFO][5678] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Apr 30 03:30:25.365107 containerd[1693]: 2025-04-30 03:30:25.336 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" iface="eth0" netns="" Apr 30 03:30:25.365107 containerd[1693]: 2025-04-30 03:30:25.336 [INFO][5678] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Apr 30 03:30:25.365107 containerd[1693]: 2025-04-30 03:30:25.336 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Apr 30 03:30:25.365107 containerd[1693]: 2025-04-30 03:30:25.354 [INFO][5686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" HandleID="k8s-pod-network.8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:25.365107 containerd[1693]: 2025-04-30 03:30:25.354 [INFO][5686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:25.365107 containerd[1693]: 2025-04-30 03:30:25.354 [INFO][5686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:25.365107 containerd[1693]: 2025-04-30 03:30:25.361 [WARNING][5686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" HandleID="k8s-pod-network.8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:25.365107 containerd[1693]: 2025-04-30 03:30:25.361 [INFO][5686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" HandleID="k8s-pod-network.8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Workload="ci--4081.3.3--a--a5554f61da-k8s-coredns--668d6bf9bc--2l4hw-eth0" Apr 30 03:30:25.365107 containerd[1693]: 2025-04-30 03:30:25.363 [INFO][5686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:25.365107 containerd[1693]: 2025-04-30 03:30:25.363 [INFO][5678] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637" Apr 30 03:30:25.365753 containerd[1693]: time="2025-04-30T03:30:25.365092806Z" level=info msg="TearDown network for sandbox \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\" successfully" Apr 30 03:30:25.375728 containerd[1693]: time="2025-04-30T03:30:25.375678789Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:25.375892 containerd[1693]: time="2025-04-30T03:30:25.375754191Z" level=info msg="RemovePodSandbox \"8565d790a0167dd545e103f743508104fa843e6d9d7b59358ce540d5f84c8637\" returns successfully" Apr 30 03:30:25.376260 containerd[1693]: time="2025-04-30T03:30:25.376214903Z" level=info msg="StopPodSandbox for \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\"" Apr 30 03:30:25.433592 containerd[1693]: 2025-04-30 03:30:25.406 [WARNING][5704] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0", GenerateName:"calico-apiserver-5df5fd9db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df5fd9db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193", Pod:"calico-apiserver-5df5fd9db9-8qshg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f4ddbab9ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:25.433592 containerd[1693]: 2025-04-30 03:30:25.406 [INFO][5704] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Apr 30 03:30:25.433592 containerd[1693]: 2025-04-30 03:30:25.406 [INFO][5704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" iface="eth0" netns="" Apr 30 03:30:25.433592 containerd[1693]: 2025-04-30 03:30:25.406 [INFO][5704] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Apr 30 03:30:25.433592 containerd[1693]: 2025-04-30 03:30:25.406 [INFO][5704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Apr 30 03:30:25.433592 containerd[1693]: 2025-04-30 03:30:25.423 [INFO][5711] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" HandleID="k8s-pod-network.41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:25.433592 containerd[1693]: 2025-04-30 03:30:25.423 [INFO][5711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:25.433592 containerd[1693]: 2025-04-30 03:30:25.423 [INFO][5711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:25.433592 containerd[1693]: 2025-04-30 03:30:25.430 [WARNING][5711] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" HandleID="k8s-pod-network.41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:25.433592 containerd[1693]: 2025-04-30 03:30:25.430 [INFO][5711] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" HandleID="k8s-pod-network.41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:25.433592 containerd[1693]: 2025-04-30 03:30:25.431 [INFO][5711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:25.433592 containerd[1693]: 2025-04-30 03:30:25.432 [INFO][5704] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Apr 30 03:30:25.434220 containerd[1693]: time="2025-04-30T03:30:25.433627241Z" level=info msg="TearDown network for sandbox \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\" successfully" Apr 30 03:30:25.434220 containerd[1693]: time="2025-04-30T03:30:25.433657241Z" level=info msg="StopPodSandbox for \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\" returns successfully" Apr 30 03:30:25.434220 containerd[1693]: time="2025-04-30T03:30:25.434194056Z" level=info msg="RemovePodSandbox for \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\"" Apr 30 03:30:25.434344 containerd[1693]: time="2025-04-30T03:30:25.434224556Z" level=info msg="Forcibly stopping sandbox \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\"" Apr 30 03:30:25.493392 containerd[1693]: 2025-04-30 03:30:25.466 [WARNING][5729] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0", GenerateName:"calico-apiserver-5df5fd9db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a2b98b3-63eb-4ce1-b8d8-aa02372a6b49", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df5fd9db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"8882ac9873790f071acf801896598a3c86b2a2f215908fe12c1d5a9fb1eaf193", Pod:"calico-apiserver-5df5fd9db9-8qshg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f4ddbab9ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:25.493392 containerd[1693]: 2025-04-30 03:30:25.466 [INFO][5729] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Apr 30 03:30:25.493392 containerd[1693]: 2025-04-30 03:30:25.466 [INFO][5729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" iface="eth0" netns="" Apr 30 03:30:25.493392 containerd[1693]: 2025-04-30 03:30:25.466 [INFO][5729] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Apr 30 03:30:25.493392 containerd[1693]: 2025-04-30 03:30:25.466 [INFO][5729] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Apr 30 03:30:25.493392 containerd[1693]: 2025-04-30 03:30:25.483 [INFO][5736] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" HandleID="k8s-pod-network.41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:25.493392 containerd[1693]: 2025-04-30 03:30:25.483 [INFO][5736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:25.493392 containerd[1693]: 2025-04-30 03:30:25.483 [INFO][5736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:25.493392 containerd[1693]: 2025-04-30 03:30:25.489 [WARNING][5736] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" HandleID="k8s-pod-network.41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:25.493392 containerd[1693]: 2025-04-30 03:30:25.489 [INFO][5736] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" HandleID="k8s-pod-network.41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Workload="ci--4081.3.3--a--a5554f61da-k8s-calico--apiserver--5df5fd9db9--8qshg-eth0" Apr 30 03:30:25.493392 containerd[1693]: 2025-04-30 03:30:25.491 [INFO][5736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:25.493392 containerd[1693]: 2025-04-30 03:30:25.492 [INFO][5729] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91" Apr 30 03:30:25.494028 containerd[1693]: time="2025-04-30T03:30:25.493441542Z" level=info msg="TearDown network for sandbox \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\" successfully" Apr 30 03:30:25.504132 containerd[1693]: time="2025-04-30T03:30:25.504092727Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:25.504245 containerd[1693]: time="2025-04-30T03:30:25.504155529Z" level=info msg="RemovePodSandbox \"41ef8d10f40343773a1bcde91d229a269f6b26ec8994a5116d1aea0e0a971b91\" returns successfully" Apr 30 03:30:25.504676 containerd[1693]: time="2025-04-30T03:30:25.504647142Z" level=info msg="StopPodSandbox for \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\"" Apr 30 03:30:25.563536 containerd[1693]: 2025-04-30 03:30:25.536 [WARNING][5754] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"79a6da92-25f7-40b3-a880-7f6f766b31fd", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6", Pod:"csi-node-driver-xqthf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63c211a892a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:25.563536 containerd[1693]: 2025-04-30 03:30:25.536 [INFO][5754] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Apr 30 03:30:25.563536 containerd[1693]: 2025-04-30 03:30:25.536 [INFO][5754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" iface="eth0" netns="" Apr 30 03:30:25.563536 containerd[1693]: 2025-04-30 03:30:25.536 [INFO][5754] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Apr 30 03:30:25.563536 containerd[1693]: 2025-04-30 03:30:25.536 [INFO][5754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Apr 30 03:30:25.563536 containerd[1693]: 2025-04-30 03:30:25.554 [INFO][5761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" HandleID="k8s-pod-network.0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Workload="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:25.563536 containerd[1693]: 2025-04-30 03:30:25.554 [INFO][5761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:25.563536 containerd[1693]: 2025-04-30 03:30:25.554 [INFO][5761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:25.563536 containerd[1693]: 2025-04-30 03:30:25.559 [WARNING][5761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" HandleID="k8s-pod-network.0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Workload="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:25.563536 containerd[1693]: 2025-04-30 03:30:25.559 [INFO][5761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" HandleID="k8s-pod-network.0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Workload="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:25.563536 containerd[1693]: 2025-04-30 03:30:25.561 [INFO][5761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:25.563536 containerd[1693]: 2025-04-30 03:30:25.562 [INFO][5754] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Apr 30 03:30:25.564267 containerd[1693]: time="2025-04-30T03:30:25.563569520Z" level=info msg="TearDown network for sandbox \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\" successfully" Apr 30 03:30:25.564267 containerd[1693]: time="2025-04-30T03:30:25.563595220Z" level=info msg="StopPodSandbox for \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\" returns successfully" Apr 30 03:30:25.564267 containerd[1693]: time="2025-04-30T03:30:25.564154835Z" level=info msg="RemovePodSandbox for \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\"" Apr 30 03:30:25.564267 containerd[1693]: time="2025-04-30T03:30:25.564185236Z" level=info msg="Forcibly stopping sandbox \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\"" Apr 30 03:30:25.622041 containerd[1693]: 2025-04-30 03:30:25.594 [WARNING][5779] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"79a6da92-25f7-40b3-a880-7f6f766b31fd", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-a5554f61da", ContainerID:"06c5999580d950801bee9837c5ed3b0930b723bc8d329815bcb634a01a4e2ff6", Pod:"csi-node-driver-xqthf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63c211a892a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:25.622041 containerd[1693]: 2025-04-30 03:30:25.594 [INFO][5779] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Apr 30 03:30:25.622041 containerd[1693]: 2025-04-30 03:30:25.594 [INFO][5779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" iface="eth0" netns="" Apr 30 03:30:25.622041 containerd[1693]: 2025-04-30 03:30:25.594 [INFO][5779] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Apr 30 03:30:25.622041 containerd[1693]: 2025-04-30 03:30:25.594 [INFO][5779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Apr 30 03:30:25.622041 containerd[1693]: 2025-04-30 03:30:25.613 [INFO][5786] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" HandleID="k8s-pod-network.0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Workload="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:25.622041 containerd[1693]: 2025-04-30 03:30:25.613 [INFO][5786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:25.622041 containerd[1693]: 2025-04-30 03:30:25.614 [INFO][5786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:25.622041 containerd[1693]: 2025-04-30 03:30:25.619 [WARNING][5786] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" HandleID="k8s-pod-network.0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Workload="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:25.622041 containerd[1693]: 2025-04-30 03:30:25.619 [INFO][5786] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" HandleID="k8s-pod-network.0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Workload="ci--4081.3.3--a--a5554f61da-k8s-csi--node--driver--xqthf-eth0" Apr 30 03:30:25.622041 containerd[1693]: 2025-04-30 03:30:25.620 [INFO][5786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:25.622041 containerd[1693]: 2025-04-30 03:30:25.621 [INFO][5779] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e" Apr 30 03:30:25.622041 containerd[1693]: time="2025-04-30T03:30:25.622000384Z" level=info msg="TearDown network for sandbox \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\" successfully" Apr 30 03:30:25.637296 containerd[1693]: time="2025-04-30T03:30:25.637247392Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:25.637422 containerd[1693]: time="2025-04-30T03:30:25.637314194Z" level=info msg="RemovePodSandbox \"0d4503bfcc5b1066ef875c2acbb345bae8bb48b769f9ffb070579d1cc799745e\" returns successfully" Apr 30 03:30:35.130491 kubelet[3184]: I0430 03:30:35.130231 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xqthf" podStartSLOduration=47.420755727 podStartE2EDuration="54.130184977s" podCreationTimestamp="2025-04-30 03:29:41 +0000 UTC" firstStartedPulling="2025-04-30 03:30:10.628285171 +0000 UTC m=+45.891743525" lastFinishedPulling="2025-04-30 03:30:17.337714321 +0000 UTC m=+52.601172775" observedRunningTime="2025-04-30 03:30:18.156512562 +0000 UTC m=+53.419970916" watchObservedRunningTime="2025-04-30 03:30:35.130184977 +0000 UTC m=+70.393667832" Apr 30 03:30:35.233883 systemd[1]: Started sshd@7-10.200.8.47:22-10.200.16.10:38174.service - OpenSSH per-connection server daemon (10.200.16.10:38174). Apr 30 03:30:35.857912 sshd[5828]: Accepted publickey for core from 10.200.16.10 port 38174 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:30:35.859351 sshd[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:35.863670 systemd-logind[1671]: New session 10 of user core. Apr 30 03:30:35.871531 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:30:37.006147 sshd[5828]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:37.009751 systemd[1]: sshd@7-10.200.8.47:22-10.200.16.10:38174.service: Deactivated successfully. Apr 30 03:30:37.012071 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:30:37.013925 systemd-logind[1671]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:30:37.015145 systemd-logind[1671]: Removed session 10. Apr 30 03:30:42.127630 systemd[1]: Started sshd@8-10.200.8.47:22-10.200.16.10:46074.service - OpenSSH per-connection server daemon (10.200.16.10:46074). 
Apr 30 03:30:42.761507 sshd[5842]: Accepted publickey for core from 10.200.16.10 port 46074 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:30:42.762159 sshd[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:42.769582 systemd-logind[1671]: New session 11 of user core. Apr 30 03:30:42.777595 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 03:30:43.778108 sshd[5842]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:43.782266 systemd[1]: sshd@8-10.200.8.47:22-10.200.16.10:46074.service: Deactivated successfully. Apr 30 03:30:43.784875 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:30:43.785595 systemd-logind[1671]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:30:43.786619 systemd-logind[1671]: Removed session 11. Apr 30 03:30:48.889733 systemd[1]: Started sshd@9-10.200.8.47:22-10.200.16.10:46090.service - OpenSSH per-connection server daemon (10.200.16.10:46090). Apr 30 03:30:49.520460 sshd[5880]: Accepted publickey for core from 10.200.16.10 port 46090 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:30:49.522211 sshd[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:49.527665 systemd-logind[1671]: New session 12 of user core. Apr 30 03:30:49.532531 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 03:30:50.022582 sshd[5880]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:50.026159 systemd-logind[1671]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:30:50.028916 systemd[1]: sshd@9-10.200.8.47:22-10.200.16.10:46090.service: Deactivated successfully. Apr 30 03:30:50.032227 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:30:50.034631 systemd-logind[1671]: Removed session 12. Apr 30 03:30:50.135837 systemd[1]: Started sshd@10-10.200.8.47:22-10.200.16.10:51472.service - OpenSSH per-connection server daemon (10.200.16.10:51472). Apr 30 03:30:50.766792 sshd[5894]: Accepted publickey for core from 10.200.16.10 port 51472 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:30:50.768214 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:50.772777 systemd-logind[1671]: New session 13 of user core. Apr 30 03:30:50.780517 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 03:30:51.296356 sshd[5894]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:51.300842 systemd[1]: sshd@10-10.200.8.47:22-10.200.16.10:51472.service: Deactivated successfully. Apr 30 03:30:51.302886 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:30:51.303730 systemd-logind[1671]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:30:51.304743 systemd-logind[1671]: Removed session 13. Apr 30 03:30:51.410671 systemd[1]: Started sshd@11-10.200.8.47:22-10.200.16.10:51478.service - OpenSSH per-connection server daemon (10.200.16.10:51478). Apr 30 03:30:52.029144 sshd[5906]: Accepted publickey for core from 10.200.16.10 port 51478 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:30:52.030712 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:52.035191 systemd-logind[1671]: New session 14 of user core. Apr 30 03:30:52.039536 systemd[1]: Started session-14.scope - Session 14 of User core. 
Apr 30 03:30:52.530008 sshd[5906]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:52.533306 systemd[1]: sshd@11-10.200.8.47:22-10.200.16.10:51478.service: Deactivated successfully. Apr 30 03:30:52.535773 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 03:30:52.537854 systemd-logind[1671]: Session 14 logged out. Waiting for processes to exit. Apr 30 03:30:52.539046 systemd-logind[1671]: Removed session 14. Apr 30 03:30:57.646661 systemd[1]: Started sshd@12-10.200.8.47:22-10.200.16.10:51490.service - OpenSSH per-connection server daemon (10.200.16.10:51490). Apr 30 03:30:58.273387 sshd[5922]: Accepted publickey for core from 10.200.16.10 port 51490 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:30:58.275037 sshd[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:58.278891 systemd-logind[1671]: New session 15 of user core. Apr 30 03:30:58.286506 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 03:30:58.772245 sshd[5922]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:58.775888 systemd[1]: sshd@12-10.200.8.47:22-10.200.16.10:51490.service: Deactivated successfully. Apr 30 03:30:58.778064 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 03:30:58.779084 systemd-logind[1671]: Session 15 logged out. Waiting for processes to exit. Apr 30 03:30:58.780112 systemd-logind[1671]: Removed session 15. Apr 30 03:31:03.889663 systemd[1]: Started sshd@13-10.200.8.47:22-10.200.16.10:43622.service - OpenSSH per-connection server daemon (10.200.16.10:43622). Apr 30 03:31:04.512490 sshd[5938]: Accepted publickey for core from 10.200.16.10 port 43622 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:04.514165 sshd[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:04.518418 systemd-logind[1671]: New session 16 of user core. Apr 30 03:31:04.526513 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 03:31:05.019172 sshd[5938]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:05.022780 systemd[1]: sshd@13-10.200.8.47:22-10.200.16.10:43622.service: Deactivated successfully. Apr 30 03:31:05.025308 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 03:31:05.027040 systemd-logind[1671]: Session 16 logged out. Waiting for processes to exit. Apr 30 03:31:05.028085 systemd-logind[1671]: Removed session 16. Apr 30 03:31:10.130591 systemd[1]: Started sshd@14-10.200.8.47:22-10.200.16.10:56458.service - OpenSSH per-connection server daemon (10.200.16.10:56458). Apr 30 03:31:10.766892 sshd[5973]: Accepted publickey for core from 10.200.16.10 port 56458 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:10.768311 sshd[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:10.773020 systemd-logind[1671]: New session 17 of user core. Apr 30 03:31:10.778531 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 03:31:11.266581 sshd[5973]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:11.271587 systemd[1]: sshd@14-10.200.8.47:22-10.200.16.10:56458.service: Deactivated successfully. Apr 30 03:31:11.273601 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 03:31:11.274991 systemd-logind[1671]: Session 17 logged out. Waiting for processes to exit. Apr 30 03:31:11.276088 systemd-logind[1671]: Removed session 17. 
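The SSH traffic in this stretch of the log repeats one lifecycle per connection: sshd accepts the publickey, pam_unix opens the session, systemd-logind registers it and systemd starts a session-N.scope, and the mirror-image entries appear on logout. Session durations can be recovered by pairing systemd-logind's "New session N" and "Removed session N" lines; a sketch under the assumption that the journal keeps exactly this line shape (the stamps carry no year, so 2025 is pinned from the log's own dates):

```go
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
	"time"
)

// Journal-style timestamp of these lines ("Apr 30 03:30:35.863670").
const stampLayout = "2006 Jan 2 15:04:05.000000"

var (
	openRe  = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user \w+\.`)
	closeRe = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.`)
)

func parseStamp(s string) (time.Time, error) {
	// The year is absent from the line, so it is supplied here.
	return time.Parse(stampLayout, "2025 "+s)
}

func main() {
	// Two lines lifted from the session-10 entries above; in practice the
	// whole journal would be streamed in instead.
	journal := `Apr 30 03:30:35.863670 systemd-logind[1671]: New session 10 of user core.
Apr 30 03:30:37.015145 systemd-logind[1671]: Removed session 10.`

	opened := map[string]time.Time{} // session ID -> open time
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		if m := openRe.FindStringSubmatch(line); m != nil {
			if t, err := parseStamp(m[1]); err == nil {
				opened[m[2]] = t
			}
			continue
		}
		if m := closeRe.FindStringSubmatch(line); m != nil {
			t1, err := parseStamp(m[1])
			t0, ok := opened[m[2]]
			if err == nil && ok {
				fmt.Printf("session %s lasted %s\n", m[2], t1.Sub(t0).Round(time.Millisecond))
				delete(opened, m[2])
			}
		}
	}
}
```

Run against the session-10 pair it reports a duration of roughly 1.151s; the longer-lived sessions below (e.g. session 19, which spans a ~1.3-minute window) pair the same way.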
Apr 30 03:31:16.377707 systemd[1]: Started sshd@15-10.200.8.47:22-10.200.16.10:56464.service - OpenSSH per-connection server daemon (10.200.16.10:56464). Apr 30 03:31:17.006102 sshd[6022]: Accepted publickey for core from 10.200.16.10 port 56464 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:17.007557 sshd[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:17.012163 systemd-logind[1671]: New session 18 of user core. Apr 30 03:31:17.021511 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 03:31:17.505258 sshd[6022]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:17.509210 systemd[1]: sshd@15-10.200.8.47:22-10.200.16.10:56464.service: Deactivated successfully. Apr 30 03:31:17.511709 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 03:31:17.512423 systemd-logind[1671]: Session 18 logged out. Waiting for processes to exit. Apr 30 03:31:17.513391 systemd-logind[1671]: Removed session 18. Apr 30 03:31:17.616127 systemd[1]: Started sshd@16-10.200.8.47:22-10.200.16.10:56474.service - OpenSSH per-connection server daemon (10.200.16.10:56474). Apr 30 03:31:18.244550 sshd[6034]: Accepted publickey for core from 10.200.16.10 port 56474 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:18.246308 sshd[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:18.251953 systemd-logind[1671]: New session 19 of user core. Apr 30 03:31:18.256519 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 03:31:18.879598 sshd[6034]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:18.882979 systemd-logind[1671]: Session 19 logged out. Waiting for processes to exit. Apr 30 03:31:18.884260 systemd[1]: sshd@16-10.200.8.47:22-10.200.16.10:56474.service: Deactivated successfully. Apr 30 03:31:18.886588 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 03:31:18.888454 systemd-logind[1671]: Removed session 19. Apr 30 03:31:18.989082 systemd[1]: Started sshd@17-10.200.8.47:22-10.200.16.10:57780.service - OpenSSH per-connection server daemon (10.200.16.10:57780). Apr 30 03:31:19.613343 sshd[6044]: Accepted publickey for core from 10.200.16.10 port 57780 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:19.614962 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:19.619513 systemd-logind[1671]: New session 20 of user core. Apr 30 03:31:19.624881 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 03:31:20.886791 sshd[6044]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:20.889730 systemd[1]: sshd@17-10.200.8.47:22-10.200.16.10:57780.service: Deactivated successfully. Apr 30 03:31:20.891870 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 03:31:20.893423 systemd-logind[1671]: Session 20 logged out. Waiting for processes to exit. Apr 30 03:31:20.894562 systemd-logind[1671]: Removed session 20. Apr 30 03:31:21.000659 systemd[1]: Started sshd@18-10.200.8.47:22-10.200.16.10:57786.service - OpenSSH per-connection server daemon (10.200.16.10:57786). 
Apr 30 03:31:21.624129 sshd[6063]: Accepted publickey for core from 10.200.16.10 port 57786 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:21.625676 sshd[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:21.630284 systemd-logind[1671]: New session 21 of user core. Apr 30 03:31:21.638647 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 03:31:22.223104 sshd[6063]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:22.226020 systemd[1]: sshd@18-10.200.8.47:22-10.200.16.10:57786.service: Deactivated successfully. Apr 30 03:31:22.228206 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 03:31:22.229757 systemd-logind[1671]: Session 21 logged out. Waiting for processes to exit. Apr 30 03:31:22.230967 systemd-logind[1671]: Removed session 21. Apr 30 03:31:22.335878 systemd[1]: Started sshd@19-10.200.8.47:22-10.200.16.10:57800.service - OpenSSH per-connection server daemon (10.200.16.10:57800). Apr 30 03:31:22.967700 sshd[6074]: Accepted publickey for core from 10.200.16.10 port 57800 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:22.969150 sshd[6074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:22.973719 systemd-logind[1671]: New session 22 of user core. Apr 30 03:31:22.984560 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 03:31:23.466474 sshd[6074]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:23.469395 systemd[1]: sshd@19-10.200.8.47:22-10.200.16.10:57800.service: Deactivated successfully. Apr 30 03:31:23.471900 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 03:31:23.473427 systemd-logind[1671]: Session 22 logged out. Waiting for processes to exit. Apr 30 03:31:23.475177 systemd-logind[1671]: Removed session 22. Apr 30 03:31:28.581648 systemd[1]: Started sshd@20-10.200.8.47:22-10.200.16.10:57802.service - OpenSSH per-connection server daemon (10.200.16.10:57802). Apr 30 03:31:29.201417 sshd[6097]: Accepted publickey for core from 10.200.16.10 port 57802 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:29.203074 sshd[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:29.208355 systemd-logind[1671]: New session 23 of user core. Apr 30 03:31:29.213522 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 03:31:29.695188 sshd[6097]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:29.698056 systemd[1]: sshd@20-10.200.8.47:22-10.200.16.10:57802.service: Deactivated successfully. Apr 30 03:31:29.700226 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 03:31:29.702096 systemd-logind[1671]: Session 23 logged out. Waiting for processes to exit. Apr 30 03:31:29.703079 systemd-logind[1671]: Removed session 23. Apr 30 03:31:34.805445 systemd[1]: Started sshd@21-10.200.8.47:22-10.200.16.10:50400.service - OpenSSH per-connection server daemon (10.200.16.10:50400). Apr 30 03:31:35.068921 systemd[1]: run-containerd-runc-k8s.io-7034f4e008adf430103835a1dec0c29f0b935ec273f04d97933b27a1cb219b90-runc.QeWgTT.mount: Deactivated successfully. 
Apr 30 03:31:35.429401 sshd[6113]: Accepted publickey for core from 10.200.16.10 port 50400 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:35.431172 sshd[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:35.436096 systemd-logind[1671]: New session 24 of user core. Apr 30 03:31:35.440545 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 03:31:35.930449 sshd[6113]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:35.934023 systemd[1]: sshd@21-10.200.8.47:22-10.200.16.10:50400.service: Deactivated successfully. Apr 30 03:31:35.936801 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 03:31:35.940235 systemd-logind[1671]: Session 24 logged out. Waiting for processes to exit. Apr 30 03:31:35.942454 systemd-logind[1671]: Removed session 24. Apr 30 03:31:41.041250 systemd[1]: Started sshd@22-10.200.8.47:22-10.200.16.10:57132.service - OpenSSH per-connection server daemon (10.200.16.10:57132). Apr 30 03:31:41.673009 sshd[6153]: Accepted publickey for core from 10.200.16.10 port 57132 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:41.674714 sshd[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:41.679445 systemd-logind[1671]: New session 25 of user core. Apr 30 03:31:41.682505 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 30 03:31:42.170620 sshd[6153]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:42.174090 systemd[1]: sshd@22-10.200.8.47:22-10.200.16.10:57132.service: Deactivated successfully. Apr 30 03:31:42.176606 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 03:31:42.177999 systemd-logind[1671]: Session 25 logged out. Waiting for processes to exit. Apr 30 03:31:42.179138 systemd-logind[1671]: Removed session 25. Apr 30 03:31:47.286658 systemd[1]: Started sshd@23-10.200.8.47:22-10.200.16.10:57146.service - OpenSSH per-connection server daemon (10.200.16.10:57146). Apr 30 03:31:47.910330 sshd[6197]: Accepted publickey for core from 10.200.16.10 port 57146 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:47.911882 sshd[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:47.915767 systemd-logind[1671]: New session 26 of user core. Apr 30 03:31:47.918520 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 03:31:48.405171 sshd[6197]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:48.408200 systemd[1]: sshd@23-10.200.8.47:22-10.200.16.10:57146.service: Deactivated successfully. Apr 30 03:31:48.410541 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 03:31:48.412204 systemd-logind[1671]: Session 26 logged out. Waiting for processes to exit. Apr 30 03:31:48.413304 systemd-logind[1671]: Removed session 26. Apr 30 03:31:53.525669 systemd[1]: Started sshd@24-10.200.8.47:22-10.200.16.10:53952.service - OpenSSH per-connection server daemon (10.200.16.10:53952). Apr 30 03:31:54.147532 sshd[6210]: Accepted publickey for core from 10.200.16.10 port 53952 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:54.149108 sshd[6210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:54.152973 systemd-logind[1671]: New session 27 of user core. Apr 30 03:31:54.156816 systemd[1]: Started session-27.scope - Session 27 of User core. 
Apr 30 03:31:54.649079 sshd[6210]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:54.651893 systemd[1]: sshd@24-10.200.8.47:22-10.200.16.10:53952.service: Deactivated successfully. Apr 30 03:31:54.654034 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 03:31:54.655562 systemd-logind[1671]: Session 27 logged out. Waiting for processes to exit. Apr 30 03:31:54.656813 systemd-logind[1671]: Removed session 27. Apr 30 03:31:59.762712 systemd[1]: Started sshd@25-10.200.8.47:22-10.200.16.10:42110.service - OpenSSH per-connection server daemon (10.200.16.10:42110). Apr 30 03:32:00.392658 sshd[6223]: Accepted publickey for core from 10.200.16.10 port 42110 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:32:00.394113 sshd[6223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:32:00.398466 systemd-logind[1671]: New session 28 of user core. Apr 30 03:32:00.403509 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 30 03:32:00.890871 sshd[6223]: pam_unix(sshd:session): session closed for user core Apr 30 03:32:00.894724 systemd[1]: sshd@25-10.200.8.47:22-10.200.16.10:42110.service: Deactivated successfully. Apr 30 03:32:00.896909 systemd[1]: session-28.scope: Deactivated successfully. Apr 30 03:32:00.897683 systemd-logind[1671]: Session 28 logged out. Waiting for processes to exit. Apr 30 03:32:00.898591 systemd-logind[1671]: Removed session 28.
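Looking back at the kubelet pod_startup_latency_tracker entry for calico-system/csi-node-driver-xqthf earlier in the log, its two durations are mutually consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window, i.e. the SLO metric excludes time spent pulling images. The check below reproduces both logged values to the last digit, but only when the pull window is taken from the monotonic m=+ offsets rather than the wall-clock stamps (which differ by 100 ns here); read that detail as an inference from the numbers, not a statement of kubelet internals:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the kubelet pod_startup_latency_tracker entry for
	// calico-system/csi-node-driver-xqthf in the log above.
	created, _ := time.Parse(time.RFC3339Nano, "2025-04-30T03:29:41Z")
	watched, _ := time.Parse(time.RFC3339Nano, "2025-04-30T03:30:35.130184977Z")

	// Image-pull window from the monotonic offsets:
	// lastFinishedPulling m=+52.601172775, firstStartedPulling m=+45.891743525.
	pull := 52.601172775 - 45.891743525 // 6.709429250 seconds

	e2e := watched.Sub(created).Seconds()
	slo := e2e - pull
	fmt.Printf("podStartE2EDuration=%.9fs\n", e2e) // 54.130184977s, as logged
	fmt.Printf("podStartSLOduration=%.9f\n", slo)  // 47.420755727, as logged
}
```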