Apr 30 03:28:00.112710 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:28:00.112744 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:28:00.112759 kernel: BIOS-provided physical RAM map:
Apr 30 03:28:00.112770 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 03:28:00.112780 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Apr 30 03:28:00.112790 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Apr 30 03:28:00.112802 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Apr 30 03:28:00.112816 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Apr 30 03:28:00.112827 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Apr 30 03:28:00.112838 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Apr 30 03:28:00.112848 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Apr 30 03:28:00.112860 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Apr 30 03:28:00.112871 kernel: printk: bootconsole [earlyser0] enabled
Apr 30 03:28:00.112883 kernel: NX (Execute Disable) protection: active
Apr 30 03:28:00.112901 kernel: APIC: Static calls initialized
Apr 30 03:28:00.112914 kernel: efi: EFI v2.7 by Microsoft
Apr 30 03:28:00.112927 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98
Apr 30 03:28:00.112939 kernel: SMBIOS 3.1.0 present.
Apr 30 03:28:00.112953 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Apr 30 03:28:00.112966 kernel: Hypervisor detected: Microsoft Hyper-V
Apr 30 03:28:00.112979 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Apr 30 03:28:00.112990 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Apr 30 03:28:00.113002 kernel: Hyper-V: Nested features: 0x1e0101
Apr 30 03:28:00.113013 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Apr 30 03:28:00.113027 kernel: Hyper-V: Using hypercall for remote TLB flush
Apr 30 03:28:00.113040 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 30 03:28:00.113052 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 30 03:28:00.113065 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Apr 30 03:28:00.113075 kernel: tsc: Detected 2593.906 MHz processor
Apr 30 03:28:00.113087 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:28:00.113099 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:28:00.113111 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Apr 30 03:28:00.113123 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 30 03:28:00.113139 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:28:00.113152 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Apr 30 03:28:00.113166 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Apr 30 03:28:00.113179 kernel: Using GB pages for direct mapping
Apr 30 03:28:00.113193 kernel: Secure boot disabled
Apr 30 03:28:00.113208 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:28:00.113225 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Apr 30 03:28:00.113244 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:00.113261 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:00.113276 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Apr 30 03:28:00.113314 kernel: ACPI: FACS 0x000000003FFFE000 000040
Apr 30 03:28:00.113328 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:00.113341 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:00.113355 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:00.113373 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:00.113387 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:00.113401 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:00.113415 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:28:00.113430 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Apr 30 03:28:00.113444 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Apr 30 03:28:00.113458 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Apr 30 03:28:00.113473 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Apr 30 03:28:00.113492 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Apr 30 03:28:00.113506 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Apr 30 03:28:00.113520 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Apr 30 03:28:00.113533 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Apr 30 03:28:00.113546 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Apr 30 03:28:00.113560 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Apr 30 03:28:00.113574 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 03:28:00.113587 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 03:28:00.113601 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Apr 30 03:28:00.113618 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Apr 30 03:28:00.113632 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Apr 30 03:28:00.113643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Apr 30 03:28:00.113656 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Apr 30 03:28:00.113669 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Apr 30 03:28:00.113682 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Apr 30 03:28:00.113696 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Apr 30 03:28:00.113709 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Apr 30 03:28:00.113723 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Apr 30 03:28:00.113739 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Apr 30 03:28:00.113752 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Apr 30 03:28:00.113766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Apr 30 03:28:00.113779 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Apr 30 03:28:00.113792 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Apr 30 03:28:00.113806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Apr 30 03:28:00.113820 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Apr 30 03:28:00.113834 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Apr 30 03:28:00.113849 kernel: Zone ranges:
Apr 30 03:28:00.113866 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:28:00.113880 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 30 03:28:00.113894 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Apr 30 03:28:00.113908 kernel: Movable zone start for each node
Apr 30 03:28:00.113922 kernel: Early memory node ranges
Apr 30 03:28:00.113936 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 30 03:28:00.113951 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Apr 30 03:28:00.113965 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Apr 30 03:28:00.113979 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Apr 30 03:28:00.113997 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Apr 30 03:28:00.114011 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:28:00.114026 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 30 03:28:00.114040 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Apr 30 03:28:00.114054 kernel: ACPI: PM-Timer IO Port: 0x408
Apr 30 03:28:00.114069 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Apr 30 03:28:00.114082 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:28:00.114097 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:28:00.114111 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:28:00.114128 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Apr 30 03:28:00.114143 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:28:00.114157 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Apr 30 03:28:00.114171 kernel: Booting paravirtualized kernel on Hyper-V
Apr 30 03:28:00.114185 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:28:00.114199 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:28:00.114213 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:28:00.114227 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:28:00.114241 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:28:00.114258 kernel: Hyper-V: PV spinlocks enabled
Apr 30 03:28:00.114272 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 03:28:00.114288 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:28:00.116332 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:28:00.116344 kernel: random: crng init done
Apr 30 03:28:00.116352 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 30 03:28:00.116364 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 03:28:00.116371 kernel: Fallback order for Node 0: 0
Apr 30 03:28:00.116386 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Apr 30 03:28:00.116403 kernel: Policy zone: Normal
Apr 30 03:28:00.116414 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:28:00.116424 kernel: software IO TLB: area num 2.
Apr 30 03:28:00.116433 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 310124K reserved, 0K cma-reserved)
Apr 30 03:28:00.116442 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:28:00.116453 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:28:00.116461 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:28:00.116473 kernel: Dynamic Preempt: voluntary
Apr 30 03:28:00.116483 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:28:00.116494 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:28:00.116507 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:28:00.116516 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:28:00.116525 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:28:00.116535 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:28:00.116543 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:28:00.116556 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:28:00.116564 kernel: Using NULL legacy PIC
Apr 30 03:28:00.116576 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Apr 30 03:28:00.116585 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:28:00.116595 kernel: Console: colour dummy device 80x25
Apr 30 03:28:00.116605 kernel: printk: console [tty1] enabled
Apr 30 03:28:00.116615 kernel: printk: console [ttyS0] enabled
Apr 30 03:28:00.116625 kernel: printk: bootconsole [earlyser0] disabled
Apr 30 03:28:00.116633 kernel: ACPI: Core revision 20230628
Apr 30 03:28:00.116644 kernel: Failed to register legacy timer interrupt
Apr 30 03:28:00.116654 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:28:00.116665 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 30 03:28:00.116673 kernel: Hyper-V: Using IPI hypercalls
Apr 30 03:28:00.116684 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Apr 30 03:28:00.116693 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Apr 30 03:28:00.116702 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Apr 30 03:28:00.116713 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Apr 30 03:28:00.116721 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Apr 30 03:28:00.116733 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Apr 30 03:28:00.116743 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Apr 30 03:28:00.116754 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 30 03:28:00.116763 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 30 03:28:00.116772 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:28:00.116782 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:28:00.116790 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:28:00.116801 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:28:00.116809 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 30 03:28:00.116821 kernel: RETBleed: Vulnerable
Apr 30 03:28:00.116831 kernel: Speculative Store Bypass: Vulnerable
Apr 30 03:28:00.116843 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:28:00.116852 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:28:00.116862 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:28:00.116870 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:28:00.116882 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:28:00.116890 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 30 03:28:00.116901 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 30 03:28:00.116909 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 30 03:28:00.116918 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:28:00.116928 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 30 03:28:00.116939 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 30 03:28:00.116949 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 30 03:28:00.116957 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 30 03:28:00.116969 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:28:00.116977 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:28:00.116987 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:28:00.116995 kernel: landlock: Up and running.
Apr 30 03:28:00.117003 kernel: SELinux: Initializing.
Apr 30 03:28:00.117015 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:28:00.117022 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:28:00.117034 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 30 03:28:00.117042 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:28:00.117055 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:28:00.117064 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:28:00.117076 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 30 03:28:00.117084 kernel: signal: max sigframe size: 3632
Apr 30 03:28:00.117096 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:28:00.117105 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:28:00.117116 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 03:28:00.117125 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:28:00.117137 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:28:00.117148 kernel: .... node #0, CPUs: #1
Apr 30 03:28:00.117157 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Apr 30 03:28:00.117166 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 30 03:28:00.117177 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:28:00.117185 kernel: smpboot: Max logical packages: 1
Apr 30 03:28:00.117197 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Apr 30 03:28:00.117205 kernel: devtmpfs: initialized
Apr 30 03:28:00.117216 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:28:00.117227 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Apr 30 03:28:00.117238 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:28:00.117246 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:28:00.117257 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:28:00.117265 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:28:00.117275 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:28:00.117285 kernel: audit: type=2000 audit(1745983678.029:1): state=initialized audit_enabled=0 res=1
Apr 30 03:28:00.117292 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:28:00.117309 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:28:00.117325 kernel: cpuidle: using governor menu
Apr 30 03:28:00.117333 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:28:00.117343 kernel: dca service started, version 1.12.1
Apr 30 03:28:00.117352 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Apr 30 03:28:00.117361 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:28:00.117372 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:28:00.117380 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:28:00.117391 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:28:00.117399 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:28:00.117412 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:28:00.117420 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:28:00.117428 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:28:00.117436 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:28:00.117444 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:28:00.117452 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:28:00.117460 kernel: ACPI: Interpreter enabled
Apr 30 03:28:00.117467 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:28:00.117475 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:28:00.117485 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:28:00.117493 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 30 03:28:00.117501 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 30 03:28:00.117509 kernel: iommu: Default domain type: Translated
Apr 30 03:28:00.117517 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:28:00.117524 kernel: efivars: Registered efivars operations
Apr 30 03:28:00.117532 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:28:00.117541 kernel: PCI: System does not support PCI
Apr 30 03:28:00.117551 kernel: vgaarb: loaded
Apr 30 03:28:00.117561 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 30 03:28:00.117572 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:28:00.117580 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:28:00.117591 kernel: pnp: PnP ACPI init
Apr 30 03:28:00.117600 kernel: pnp: PnP ACPI: found 3 devices
Apr 30 03:28:00.117608 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:28:00.117619 kernel: NET: Registered PF_INET protocol family
Apr 30 03:28:00.117627 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:28:00.117639 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 30 03:28:00.117649 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:28:00.117661 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 03:28:00.117669 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 30 03:28:00.117680 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 30 03:28:00.117689 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 30 03:28:00.117700 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 30 03:28:00.117708 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:28:00.117720 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:28:00.117728 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:28:00.117741 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 30 03:28:00.117750 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
Apr 30 03:28:00.117759 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 03:28:00.117769 kernel: Initialise system trusted keyrings
Apr 30 03:28:00.117777 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 30 03:28:00.117788 kernel: Key type asymmetric registered
Apr 30 03:28:00.117796 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:28:00.117807 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:28:00.117816 kernel: io scheduler mq-deadline registered
Apr 30 03:28:00.117829 kernel: io scheduler kyber registered
Apr 30 03:28:00.117837 kernel: io scheduler bfq registered
Apr 30 03:28:00.117847 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:28:00.117857 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:28:00.117864 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:28:00.117876 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 30 03:28:00.117883 kernel: i8042: PNP: No PS/2 controller found.
Apr 30 03:28:00.118014 kernel: rtc_cmos 00:02: registered as rtc0
Apr 30 03:28:00.118109 kernel: rtc_cmos 00:02: setting system clock to 2025-04-30T03:27:59 UTC (1745983679)
Apr 30 03:28:00.118204 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 30 03:28:00.118219 kernel: intel_pstate: CPU model not supported
Apr 30 03:28:00.118227 kernel: efifb: probing for efifb
Apr 30 03:28:00.118235 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 30 03:28:00.118247 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 30 03:28:00.118267 kernel: efifb: scrolling: redraw
Apr 30 03:28:00.118275 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 30 03:28:00.118287 kernel: Console: switching to colour frame buffer device 128x48
Apr 30 03:28:00.118307 kernel: fb0: EFI VGA frame buffer device
Apr 30 03:28:00.118322 kernel: pstore: Using crash dump compression: deflate
Apr 30 03:28:00.118333 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 30 03:28:00.118341 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:28:00.118349 kernel: Segment Routing with IPv6
Apr 30 03:28:00.118360 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:28:00.118381 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:28:00.118391 kernel: Key type dns_resolver registered
Apr 30 03:28:00.118407 kernel: IPI shorthand broadcast: enabled
Apr 30 03:28:00.118427 kernel: sched_clock: Marking stable (993003700, 56840200)->(1311838900, -261995000)
Apr 30 03:28:00.118441 kernel: registered taskstats version 1
Apr 30 03:28:00.118449 kernel: Loading compiled-in X.509 certificates
Apr 30 03:28:00.118465 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:28:00.118481 kernel: Key type .fscrypt registered
Apr 30 03:28:00.118495 kernel: Key type fscrypt-provisioning registered
Apr 30 03:28:00.118503 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:28:00.118516 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:28:00.118540 kernel: ima: No architecture policies found
Apr 30 03:28:00.118556 kernel: clk: Disabling unused clocks
Apr 30 03:28:00.118568 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:28:00.118576 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:28:00.118589 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:28:00.118605 kernel: Run /init as init process
Apr 30 03:28:00.118621 kernel: with arguments:
Apr 30 03:28:00.118630 kernel: /init
Apr 30 03:28:00.118639 kernel: with environment:
Apr 30 03:28:00.118657 kernel: HOME=/
Apr 30 03:28:00.118667 kernel: TERM=linux
Apr 30 03:28:00.118676 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:28:00.118700 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:28:00.118714 systemd[1]: Detected virtualization microsoft.
Apr 30 03:28:00.118725 systemd[1]: Detected architecture x86-64.
Apr 30 03:28:00.118747 systemd[1]: Running in initrd.
Apr 30 03:28:00.118763 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:28:00.118774 systemd[1]: Hostname set to .
Apr 30 03:28:00.118792 systemd[1]: Initializing machine ID from random generator.
Apr 30 03:28:00.118811 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:28:00.118825 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:28:00.118834 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:28:00.118850 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:28:00.118866 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:28:00.118877 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:28:00.118893 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:28:00.118913 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:28:00.118926 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:28:00.118934 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:28:00.118946 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:28:00.118961 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:28:00.118978 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:28:00.118995 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:28:00.119004 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:28:00.119015 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:28:00.119033 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:28:00.119047 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:28:00.119055 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:28:00.119070 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:28:00.119086 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:28:00.119098 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:28:00.119115 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:28:00.119128 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:28:00.119136 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:28:00.119155 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:28:00.119169 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:28:00.119177 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:28:00.119192 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:28:00.119228 systemd-journald[176]: Collecting audit messages is disabled.
Apr 30 03:28:00.119265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:00.119274 systemd-journald[176]: Journal started
Apr 30 03:28:00.119315 systemd-journald[176]: Runtime Journal (/run/log/journal/09abdb7141f345a9aa7fbacd22b3663a) is 8.0M, max 158.8M, 150.8M free.
Apr 30 03:28:00.141056 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:28:00.141477 systemd-modules-load[177]: Inserted module 'overlay'
Apr 30 03:28:00.141659 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:28:00.148718 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:28:00.155107 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:28:00.159530 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:00.182462 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:28:00.183514 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:28:00.192516 kernel: Bridge firewalling registered
Apr 30 03:28:00.188559 systemd-modules-load[177]: Inserted module 'br_netfilter'
Apr 30 03:28:00.194955 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:28:00.209452 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:28:00.216403 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:28:00.219843 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:00.223328 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:28:00.226723 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:28:00.242036 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:28:00.247862 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:28:00.260431 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:28:00.268972 dracut-cmdline[204]: dracut-dracut-053
Apr 30 03:28:00.268972 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:28:00.290356 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:28:00.295455 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:28:00.305573 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:28:00.344627 systemd-resolved[243]: Positive Trust Anchors:
Apr 30 03:28:00.344641 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:28:00.344698 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:28:00.369687 systemd-resolved[243]: Defaulting to hostname 'linux'.
Apr 30 03:28:00.372964 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:28:00.373969 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:28:00.399316 kernel: SCSI subsystem initialized
Apr 30 03:28:00.409312 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:28:00.423319 kernel: iscsi: registered transport (tcp)
Apr 30 03:28:00.444159 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:28:00.444215 kernel: QLogic iSCSI HBA Driver
Apr 30 03:28:00.479252 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:28:00.487457 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:28:00.516376 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:28:00.516452 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:28:00.520055 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:28:00.560321 kernel: raid6: avx512x4 gen() 18256 MB/s
Apr 30 03:28:00.579315 kernel: raid6: avx512x2 gen() 18232 MB/s
Apr 30 03:28:00.598312 kernel: raid6: avx512x1 gen() 18313 MB/s
Apr 30 03:28:00.616308 kernel: raid6: avx2x4 gen() 18243 MB/s
Apr 30 03:28:00.635312 kernel: raid6: avx2x2 gen() 18273 MB/s
Apr 30 03:28:00.655497 kernel: raid6: avx2x1 gen() 14103 MB/s
Apr 30 03:28:00.655539 kernel: raid6: using algorithm avx512x1 gen() 18313 MB/s
Apr 30 03:28:00.676085 kernel: raid6: .... xor() 26885 MB/s, rmw enabled
Apr 30 03:28:00.676115 kernel: raid6: using avx512x2 recovery algorithm
Apr 30 03:28:00.699320 kernel: xor: automatically using best checksumming function avx
Apr 30 03:28:00.845324 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:28:00.855123 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:28:00.865449 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:28:00.878601 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Apr 30 03:28:00.882983 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:28:00.904469 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:28:00.919431 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Apr 30 03:28:00.947847 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:28:00.968567 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:28:01.010731 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:28:01.027640 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 03:28:01.068917 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:28:01.075722 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:28:01.079292 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:28:01.088780 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:28:01.101243 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:28:01.098649 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:28:01.122585 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:28:01.125484 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:01.132336 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:28:01.138430 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:28:01.150867 kernel: hv_vmbus: Vmbus version:5.2
Apr 30 03:28:01.150898 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:28:01.138621 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:01.141503 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:01.159316 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:28:01.163718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:01.169546 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:28:01.193268 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 30 03:28:01.193362 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 30 03:28:01.203509 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:28:01.205248 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:01.220465 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:01.233489 kernel: PTP clock support registered
Apr 30 03:28:01.244326 kernel: hv_utils: Registering HyperV Utility Driver
Apr 30 03:28:01.244384 kernel: hv_vmbus: registering driver hv_utils
Apr 30 03:28:01.246370 kernel: hv_utils: Heartbeat IC version 3.0
Apr 30 03:28:01.248918 kernel: hv_utils: Shutdown IC version 3.2
Apr 30 03:28:01.251154 kernel: hv_utils: TimeSync IC version 4.0
Apr 30 03:28:01.771283 systemd-resolved[243]: Clock change detected. Flushing caches.
Apr 30 03:28:01.783724 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 30 03:28:01.782801 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:01.801697 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 30 03:28:01.801728 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 03:28:01.801742 kernel: hv_vmbus: registering driver hv_netvsc
Apr 30 03:28:01.801753 kernel: hv_vmbus: registering driver hv_storvsc
Apr 30 03:28:01.807100 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:28:01.820006 kernel: scsi host0: storvsc_host_t
Apr 30 03:28:01.820315 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 30 03:28:01.820365 kernel: scsi host1: storvsc_host_t
Apr 30 03:28:01.827690 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Apr 30 03:28:01.831606 kernel: hv_vmbus: registering driver hid_hyperv
Apr 30 03:28:01.838765 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 30 03:28:01.838810 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 30 03:28:01.853204 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:01.868193 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 30 03:28:01.869922 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 03:28:01.869951 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 30 03:28:01.883883 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 30 03:28:01.899778 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 30 03:28:01.900000 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 30 03:28:01.900171 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 30 03:28:01.900343 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 30 03:28:01.900500 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:28:01.900526 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 30 03:28:01.990830 kernel: hv_netvsc 6045bddf-7bf7-6045-bddf-7bf76045bddf eth0: VF slot 1 added
Apr 30 03:28:01.999951 kernel: hv_vmbus: registering driver hv_pci
Apr 30 03:28:02.005159 kernel: hv_pci 12ddb82a-d48e-4535-895a-0e7528f09d6a: PCI VMBus probing: Using version 0x10004
Apr 30 03:28:02.051411 kernel: hv_pci 12ddb82a-d48e-4535-895a-0e7528f09d6a: PCI host bridge to bus d48e:00
Apr 30 03:28:02.051838 kernel: pci_bus d48e:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Apr 30 03:28:02.052027 kernel: pci_bus d48e:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 30 03:28:02.052180 kernel: pci d48e:00:02.0: [15b3:1016] type 00 class 0x020000
Apr 30 03:28:02.052371 kernel: pci d48e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 30 03:28:02.052546 kernel: pci d48e:00:02.0: enabling Extended Tags
Apr 30 03:28:02.052742 kernel: pci d48e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at d48e:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Apr 30 03:28:02.052924 kernel: pci_bus d48e:00: busn_res: [bus 00-ff] end is updated to 00
Apr 30 03:28:02.053073 kernel: pci d48e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 30 03:28:02.217850 kernel: mlx5_core d48e:00:02.0: enabling device (0000 -> 0002)
Apr 30 03:28:02.450243 kernel: mlx5_core d48e:00:02.0: firmware version: 14.30.5000
Apr 30 03:28:02.450470 kernel: hv_netvsc 6045bddf-7bf7-6045-bddf-7bf76045bddf eth0: VF registering: eth1
Apr 30 03:28:02.450654 kernel: mlx5_core d48e:00:02.0 eth1: joined to eth0
Apr 30 03:28:02.450837 kernel: mlx5_core d48e:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Apr 30 03:28:02.352620 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 30 03:28:02.458611 kernel: mlx5_core d48e:00:02.0 enP54414s1: renamed from eth1
Apr 30 03:28:02.469674 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (454)
Apr 30 03:28:02.487441 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 30 03:28:02.490913 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 30 03:28:02.501765 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 03:28:03.371618 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (460)
Apr 30 03:28:03.385852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 30 03:28:03.489437 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 30 03:28:03.514040 disk-uuid[592]: Warning: The kernel is still using the old partition table.
Apr 30 03:28:03.514040 disk-uuid[592]: The new table will be used at the next reboot or after you
Apr 30 03:28:03.514040 disk-uuid[592]: run partprobe(8) or kpartx(8)
Apr 30 03:28:03.514040 disk-uuid[592]: The operation has completed successfully.
Apr 30 03:28:03.691478 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 03:28:03.691587 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 03:28:03.701745 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 03:28:03.707130 sh[688]: Success
Apr 30 03:28:03.733660 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 30 03:28:03.924807 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 03:28:03.937703 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 03:28:03.942711 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 03:28:03.959406 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26
Apr 30 03:28:03.959465 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:03.963078 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 03:28:03.965985 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 03:28:03.968686 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 03:28:04.212106 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 03:28:04.217157 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 03:28:04.229838 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 03:28:04.234733 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 03:28:04.260531 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:04.260578 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:04.260618 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:28:04.278616 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:28:04.291608 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:04.291885 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 03:28:04.301738 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 03:28:04.311800 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 03:28:04.326949 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:28:04.338802 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:28:04.359536 systemd-networkd[872]: lo: Link UP
Apr 30 03:28:04.359545 systemd-networkd[872]: lo: Gained carrier
Apr 30 03:28:04.361641 systemd-networkd[872]: Enumeration completed
Apr 30 03:28:04.361862 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:28:04.365011 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:28:04.365014 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:28:04.366104 systemd[1]: Reached target network.target - Network.
Apr 30 03:28:04.429610 kernel: mlx5_core d48e:00:02.0 enP54414s1: Link up
Apr 30 03:28:04.466096 kernel: hv_netvsc 6045bddf-7bf7-6045-bddf-7bf76045bddf eth0: Data path switched to VF: enP54414s1
Apr 30 03:28:04.465677 systemd-networkd[872]: enP54414s1: Link UP
Apr 30 03:28:04.465798 systemd-networkd[872]: eth0: Link UP
Apr 30 03:28:04.465956 systemd-networkd[872]: eth0: Gained carrier
Apr 30 03:28:04.465968 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:28:04.471492 systemd-networkd[872]: enP54414s1: Gained carrier
Apr 30 03:28:04.494647 systemd-networkd[872]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16
Apr 30 03:28:05.162893 ignition[851]: Ignition 2.19.0
Apr 30 03:28:05.162904 ignition[851]: Stage: fetch-offline
Apr 30 03:28:05.164507 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:28:05.162955 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:05.162966 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:05.163098 ignition[851]: parsed url from cmdline: ""
Apr 30 03:28:05.163104 ignition[851]: no config URL provided
Apr 30 03:28:05.163112 ignition[851]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:28:05.163122 ignition[851]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:28:05.163128 ignition[851]: failed to fetch config: resource requires networking
Apr 30 03:28:05.163397 ignition[851]: Ignition finished successfully
Apr 30 03:28:05.194767 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 03:28:05.211437 ignition[881]: Ignition 2.19.0
Apr 30 03:28:05.211448 ignition[881]: Stage: fetch
Apr 30 03:28:05.211691 ignition[881]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:05.211711 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:05.211795 ignition[881]: parsed url from cmdline: ""
Apr 30 03:28:05.211798 ignition[881]: no config URL provided
Apr 30 03:28:05.211804 ignition[881]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:28:05.211813 ignition[881]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:28:05.211834 ignition[881]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 30 03:28:05.299456 ignition[881]: GET result: OK
Apr 30 03:28:05.299648 ignition[881]: config has been read from IMDS userdata
Apr 30 03:28:05.299678 ignition[881]: parsing config with SHA512: c6294e75d68f69b36ecb8ca3068eef377fef38b4160d3e77dc0626971cd78fdc43d372358f869c80927f4b0279cb8e265982822275d8c2beac7554a2647d6f7e
Apr 30 03:28:05.306951 unknown[881]: fetched base config from "system"
Apr 30 03:28:05.306983 unknown[881]: fetched base config from "system"
Apr 30 03:28:05.309004 ignition[881]: fetch: fetch complete
Apr 30 03:28:05.306993 unknown[881]: fetched user config from "azure"
Apr 30 03:28:05.309013 ignition[881]: fetch: fetch passed
Apr 30 03:28:05.311107 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:28:05.309086 ignition[881]: Ignition finished successfully
Apr 30 03:28:05.321701 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:28:05.338420 ignition[888]: Ignition 2.19.0
Apr 30 03:28:05.338430 ignition[888]: Stage: kargs
Apr 30 03:28:05.338661 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:05.338675 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:05.339871 ignition[888]: kargs: kargs passed
Apr 30 03:28:05.339911 ignition[888]: Ignition finished successfully
Apr 30 03:28:05.350000 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:28:05.358761 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:28:05.372647 ignition[894]: Ignition 2.19.0
Apr 30 03:28:05.372657 ignition[894]: Stage: disks
Apr 30 03:28:05.372866 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:05.372876 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:05.373708 ignition[894]: disks: disks passed
Apr 30 03:28:05.373752 ignition[894]: Ignition finished successfully
Apr 30 03:28:05.384064 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:28:05.386631 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:28:05.391617 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:28:05.394572 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:28:05.400298 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:28:05.405771 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:28:05.415774 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:28:05.473374 systemd-fsck[902]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 30 03:28:05.479238 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:28:05.492101 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:28:05.579623 kernel: EXT4-fs (sda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:28:05.580190 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:28:05.582905 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:28:05.620748 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:28:05.625719 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:28:05.637047 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (913)
Apr 30 03:28:05.645783 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:05.645835 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:05.646043 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 03:28:05.651650 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:28:05.657904 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:28:05.657312 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:28:05.657355 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:28:05.669824 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:28:05.674391 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:28:05.692760 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:28:05.748809 systemd-networkd[872]: enP54414s1: Gained IPv6LL
Apr 30 03:28:05.749184 systemd-networkd[872]: eth0: Gained IPv6LL
Apr 30 03:28:06.150030 coreos-metadata[915]: Apr 30 03:28:06.149 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 30 03:28:06.156338 coreos-metadata[915]: Apr 30 03:28:06.156 INFO Fetch successful
Apr 30 03:28:06.156338 coreos-metadata[915]: Apr 30 03:28:06.156 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 30 03:28:06.178685 coreos-metadata[915]: Apr 30 03:28:06.178 INFO Fetch successful
Apr 30 03:28:06.199379 initrd-setup-root[941]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:28:06.203581 coreos-metadata[915]: Apr 30 03:28:06.202 INFO wrote hostname ci-4081.3.3-a-e2728433b6 to /sysroot/etc/hostname
Apr 30 03:28:06.203455 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:28:06.236383 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:28:06.245040 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:28:06.249395 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:28:07.024980 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:28:07.035711 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:28:07.042694 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:28:07.054219 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:07.054466 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 03:28:07.086188 ignition[1035]: INFO : Ignition 2.19.0
Apr 30 03:28:07.088820 ignition[1035]: INFO : Stage: mount
Apr 30 03:28:07.088820 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:07.088820 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:07.088820 ignition[1035]: INFO : mount: mount passed
Apr 30 03:28:07.088820 ignition[1035]: INFO : Ignition finished successfully
Apr 30 03:28:07.090701 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 03:28:07.096355 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 03:28:07.114698 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 03:28:07.121288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:28:07.135611 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1047)
Apr 30 03:28:07.139610 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:07.139643 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:07.144551 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:28:07.149610 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:28:07.151309 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:28:07.172223 ignition[1063]: INFO : Ignition 2.19.0
Apr 30 03:28:07.172223 ignition[1063]: INFO : Stage: files
Apr 30 03:28:07.178326 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:07.178326 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:07.178326 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 03:28:07.200251 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 03:28:07.200251 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 03:28:07.282177 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 03:28:07.286749 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 03:28:07.286749 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 03:28:07.282681 unknown[1063]: wrote ssh authorized keys file for user: core
Apr 30 03:28:07.297579 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:28:07.305304 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 03:28:07.443314 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 03:28:11.069631 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Apr 30 03:28:11.614860 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 30 03:28:11.943556 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:28:11.943556 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 30 03:28:11.975892 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:28:11.984280 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:28:11.984280 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 30 03:28:11.984280 ignition[1063]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 03:28:11.984280 ignition[1063]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 03:28:11.984280 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:28:11.984280 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:28:11.984280 ignition[1063]: INFO : files: files passed
Apr 30 03:28:11.984280 ignition[1063]: INFO : Ignition finished successfully
Apr 30 03:28:11.978617 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:28:12.008285 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:28:12.020900 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:28:12.024155 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:28:12.024283 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:28:12.038504 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:28:12.038504 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:28:12.051416 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:28:12.040046 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:28:12.047947 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:28:12.063655 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:28:12.094573 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:28:12.094703 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:28:12.100479 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:28:12.106241 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:28:12.113326 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:28:12.125745 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:28:12.139345 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:28:12.149749 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:28:12.159500 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:28:12.163404 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:28:12.172194 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 03:28:12.178284 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 03:28:12.178452 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:12.188222 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 03:28:12.193437 systemd[1]: Stopped target basic.target - Basic System. Apr 30 03:28:12.194555 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 03:28:12.195501 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:28:12.195943 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 03:28:12.196356 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 03:28:12.196813 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:28:12.197240 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 03:28:12.197654 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 03:28:12.198058 systemd[1]: Stopped target swap.target - Swaps. Apr 30 03:28:12.198425 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 03:28:12.198545 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:28:12.199535 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:28:12.199972 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:12.200351 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 03:28:12.213959 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:12.264250 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 03:28:12.264416 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Apr 30 03:28:12.277995 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 03:28:12.278198 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:12.284309 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 03:28:12.284459 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 03:28:12.292088 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 03:28:12.292229 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 03:28:12.303941 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 03:28:12.310050 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 03:28:12.315359 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 03:28:12.315692 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:12.324010 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 03:28:12.324121 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:28:12.334639 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 03:28:12.334747 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 03:28:12.345265 ignition[1116]: INFO : Ignition 2.19.0 Apr 30 03:28:12.345265 ignition[1116]: INFO : Stage: umount Apr 30 03:28:12.345265 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:12.345265 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:12.352155 ignition[1116]: INFO : umount: umount passed Apr 30 03:28:12.352155 ignition[1116]: INFO : Ignition finished successfully Apr 30 03:28:12.348179 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 03:28:12.348315 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Apr 30 03:28:12.350524 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 03:28:12.350579 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 03:28:12.351344 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 03:28:12.351383 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 03:28:12.355184 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 03:28:12.355225 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 03:28:12.355581 systemd[1]: Stopped target network.target - Network. Apr 30 03:28:12.355958 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 03:28:12.355997 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:28:12.356392 systemd[1]: Stopped target paths.target - Path Units. Apr 30 03:28:12.362447 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 03:28:12.396221 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:12.401746 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 03:28:12.407758 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 03:28:12.410201 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 03:28:12.410252 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:28:12.415016 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 03:28:12.415066 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:28:12.417849 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 03:28:12.417918 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 03:28:12.426172 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 03:28:12.428279 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Apr 30 03:28:12.435907 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 03:28:12.443144 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 03:28:12.444453 systemd-networkd[872]: eth0: DHCPv6 lease lost Apr 30 03:28:12.448073 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 03:28:12.448632 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 03:28:12.448738 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 03:28:12.459936 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 03:28:12.460029 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 03:28:12.465379 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 03:28:12.465455 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:12.469410 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 03:28:12.469469 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 03:28:12.488687 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 03:28:12.496087 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 03:28:12.496154 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:28:12.501748 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:12.507099 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 03:28:12.507195 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 03:28:12.523821 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:28:12.523944 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:12.531069 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Apr 30 03:28:12.531132 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:12.531932 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 03:28:12.531976 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:12.550148 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 03:28:12.550302 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:12.556263 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 03:28:12.556343 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:12.561469 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 03:28:12.561511 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:12.566701 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 03:28:12.566753 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:28:12.572030 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 03:28:12.572076 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 03:28:12.595764 kernel: hv_netvsc 6045bddf-7bf7-6045-bddf-7bf76045bddf eth0: Data path switched from VF: enP54414s1 Apr 30 03:28:12.576986 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:28:12.577037 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:12.601841 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 03:28:12.607227 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 03:28:12.607293 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:28:12.613087 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Apr 30 03:28:12.616163 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:28:12.625313 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 03:28:12.625370 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:12.630905 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:28:12.630957 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:12.634277 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 03:28:12.634356 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 03:28:12.642343 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 03:28:12.642446 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 03:28:12.647750 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 03:28:12.663788 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 03:28:12.673178 systemd[1]: Switching root. 
Apr 30 03:28:12.732791 systemd-journald[176]: Journal stopped Apr 30 03:28:00.112710 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025 Apr 30 03:28:00.112744 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:28:00.112759 kernel: BIOS-provided physical RAM map: Apr 30 03:28:00.112770 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 30 03:28:00.112780 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Apr 30 03:28:00.112790 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Apr 30 03:28:00.112802 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Apr 30 03:28:00.112816 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Apr 30 03:28:00.112827 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Apr 30 03:28:00.112838 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Apr 30 03:28:00.112848 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Apr 30 03:28:00.112860 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Apr 30 03:28:00.112871 kernel: printk: bootconsole [earlyser0] enabled Apr 30 03:28:00.112883 kernel: NX (Execute Disable) protection: active Apr 30 03:28:00.112901 kernel: APIC: Static calls initialized Apr 30 03:28:00.112914 kernel: efi: EFI v2.7 by Microsoft Apr 30 03:28:00.112927 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 
SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 Apr 30 03:28:00.112939 kernel: SMBIOS 3.1.0 present. Apr 30 03:28:00.112953 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Apr 30 03:28:00.112966 kernel: Hypervisor detected: Microsoft Hyper-V Apr 30 03:28:00.112979 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Apr 30 03:28:00.112990 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0 Apr 30 03:28:00.113002 kernel: Hyper-V: Nested features: 0x1e0101 Apr 30 03:28:00.113013 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Apr 30 03:28:00.113027 kernel: Hyper-V: Using hypercall for remote TLB flush Apr 30 03:28:00.113040 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Apr 30 03:28:00.113052 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Apr 30 03:28:00.113065 kernel: tsc: Marking TSC unstable due to running on Hyper-V Apr 30 03:28:00.113075 kernel: tsc: Detected 2593.906 MHz processor Apr 30 03:28:00.113087 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 03:28:00.113099 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 03:28:00.113111 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Apr 30 03:28:00.113123 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 30 03:28:00.113139 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 03:28:00.113152 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Apr 30 03:28:00.113166 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Apr 30 03:28:00.113179 kernel: Using GB pages for direct mapping Apr 30 03:28:00.113193 kernel: Secure boot disabled Apr 30 03:28:00.113208 kernel: ACPI: Early table checksum verification disabled Apr 30 03:28:00.113225 
kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Apr 30 03:28:00.113244 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 03:28:00.113261 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 03:28:00.113276 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Apr 30 03:28:00.113314 kernel: ACPI: FACS 0x000000003FFFE000 000040 Apr 30 03:28:00.113328 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 03:28:00.113341 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 03:28:00.113355 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 03:28:00.113373 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 03:28:00.113387 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 03:28:00.113401 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 03:28:00.113415 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 03:28:00.113430 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Apr 30 03:28:00.113444 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Apr 30 03:28:00.113458 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Apr 30 03:28:00.113473 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Apr 30 03:28:00.113492 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Apr 30 03:28:00.113506 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Apr 30 03:28:00.113520 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Apr 30 03:28:00.113533 kernel: ACPI: Reserving SRAT table memory at [mem 
0x3ffd4000-0x3ffd42cf] Apr 30 03:28:00.113546 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Apr 30 03:28:00.113560 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Apr 30 03:28:00.113574 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 30 03:28:00.113587 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 30 03:28:00.113601 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Apr 30 03:28:00.113618 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Apr 30 03:28:00.113632 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Apr 30 03:28:00.113643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Apr 30 03:28:00.113656 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Apr 30 03:28:00.113669 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Apr 30 03:28:00.113682 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Apr 30 03:28:00.113696 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Apr 30 03:28:00.113709 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Apr 30 03:28:00.113723 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Apr 30 03:28:00.113739 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Apr 30 03:28:00.113752 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Apr 30 03:28:00.113766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Apr 30 03:28:00.113779 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Apr 30 03:28:00.113792 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Apr 30 03:28:00.113806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Apr 30 03:28:00.113820 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + 
[mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Apr 30 03:28:00.113834 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Apr 30 03:28:00.113849 kernel: Zone ranges: Apr 30 03:28:00.113866 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 03:28:00.113880 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 30 03:28:00.113894 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Apr 30 03:28:00.113908 kernel: Movable zone start for each node Apr 30 03:28:00.113922 kernel: Early memory node ranges Apr 30 03:28:00.113936 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 30 03:28:00.113951 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Apr 30 03:28:00.113965 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Apr 30 03:28:00.113979 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Apr 30 03:28:00.113997 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Apr 30 03:28:00.114011 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 03:28:00.114026 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 30 03:28:00.114040 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Apr 30 03:28:00.114054 kernel: ACPI: PM-Timer IO Port: 0x408 Apr 30 03:28:00.114069 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Apr 30 03:28:00.114082 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Apr 30 03:28:00.114097 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 03:28:00.114111 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 03:28:00.114128 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Apr 30 03:28:00.114143 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 30 03:28:00.114157 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Apr 30 03:28:00.114171 kernel: Booting paravirtualized kernel on Hyper-V Apr 30 03:28:00.114185 kernel: clocksource: 
refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 03:28:00.114199 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 30 03:28:00.114213 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Apr 30 03:28:00.114227 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Apr 30 03:28:00.114241 kernel: pcpu-alloc: [0] 0 1 Apr 30 03:28:00.114258 kernel: Hyper-V: PV spinlocks enabled Apr 30 03:28:00.114272 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 30 03:28:00.114288 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:28:00.116332 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 03:28:00.116344 kernel: random: crng init done Apr 30 03:28:00.116352 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 30 03:28:00.116364 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 03:28:00.116371 kernel: Fallback order for Node 0: 0 Apr 30 03:28:00.116386 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Apr 30 03:28:00.116403 kernel: Policy zone: Normal Apr 30 03:28:00.116414 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 03:28:00.116424 kernel: software IO TLB: area num 2. 
Apr 30 03:28:00.116433 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 310124K reserved, 0K cma-reserved) Apr 30 03:28:00.116442 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 03:28:00.116453 kernel: ftrace: allocating 37944 entries in 149 pages Apr 30 03:28:00.116461 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 03:28:00.116473 kernel: Dynamic Preempt: voluntary Apr 30 03:28:00.116483 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 03:28:00.116494 kernel: rcu: RCU event tracing is enabled. Apr 30 03:28:00.116507 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 03:28:00.116516 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 03:28:00.116525 kernel: Rude variant of Tasks RCU enabled. Apr 30 03:28:00.116535 kernel: Tracing variant of Tasks RCU enabled. Apr 30 03:28:00.116543 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 03:28:00.116556 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 03:28:00.116564 kernel: Using NULL legacy PIC Apr 30 03:28:00.116576 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Apr 30 03:28:00.116585 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 30 03:28:00.116595 kernel: Console: colour dummy device 80x25 Apr 30 03:28:00.116605 kernel: printk: console [tty1] enabled Apr 30 03:28:00.116615 kernel: printk: console [ttyS0] enabled Apr 30 03:28:00.116625 kernel: printk: bootconsole [earlyser0] disabled Apr 30 03:28:00.116633 kernel: ACPI: Core revision 20230628 Apr 30 03:28:00.116644 kernel: Failed to register legacy timer interrupt Apr 30 03:28:00.116654 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 03:28:00.116665 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 30 03:28:00.116673 kernel: Hyper-V: Using IPI hypercalls Apr 30 03:28:00.116684 kernel: APIC: send_IPI() replaced with hv_send_ipi() Apr 30 03:28:00.116693 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Apr 30 03:28:00.116702 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Apr 30 03:28:00.116713 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Apr 30 03:28:00.116721 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Apr 30 03:28:00.116733 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Apr 30 03:28:00.116743 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Apr 30 03:28:00.116754 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 30 03:28:00.116763 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 30 03:28:00.116772 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 03:28:00.116782 kernel: Spectre V2 : Mitigation: Retpolines Apr 30 03:28:00.116790 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 03:28:00.116801 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 03:28:00.116809 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 30 03:28:00.116821 kernel: RETBleed: Vulnerable Apr 30 03:28:00.116831 kernel: Speculative Store Bypass: Vulnerable Apr 30 03:28:00.116843 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Apr 30 03:28:00.116852 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 30 03:28:00.116862 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 03:28:00.116870 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 03:28:00.116882 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 03:28:00.116890 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 30 03:28:00.116901 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 30 03:28:00.116909 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 30 03:28:00.116918 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 03:28:00.116928 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 30 03:28:00.116939 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 30 03:28:00.116949 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 30 03:28:00.116957 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 30 03:28:00.116969 kernel: Freeing SMP alternatives memory: 32K Apr 30 03:28:00.116977 kernel: pid_max: default: 32768 minimum: 301 Apr 30 03:28:00.116987 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 03:28:00.116995 kernel: landlock: Up and running. Apr 30 03:28:00.117003 kernel: SELinux: Initializing. 
Apr 30 03:28:00.117015 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 03:28:00.117022 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 03:28:00.117034 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 30 03:28:00.117042 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:28:00.117055 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:28:00.117064 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:28:00.117076 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 30 03:28:00.117084 kernel: signal: max sigframe size: 3632 Apr 30 03:28:00.117096 kernel: rcu: Hierarchical SRCU implementation. Apr 30 03:28:00.117105 kernel: rcu: Max phase no-delay instances is 400. Apr 30 03:28:00.117116 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 30 03:28:00.117125 kernel: smp: Bringing up secondary CPUs ... Apr 30 03:28:00.117137 kernel: smpboot: x86: Booting SMP configuration: Apr 30 03:28:00.117148 kernel: .... node #0, CPUs: #1 Apr 30 03:28:00.117157 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Apr 30 03:28:00.117166 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Apr 30 03:28:00.117177 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 03:28:00.117185 kernel: smpboot: Max logical packages: 1 Apr 30 03:28:00.117197 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Apr 30 03:28:00.117205 kernel: devtmpfs: initialized Apr 30 03:28:00.117216 kernel: x86/mm: Memory block size: 128MB Apr 30 03:28:00.117227 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Apr 30 03:28:00.117238 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 03:28:00.117246 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 03:28:00.117257 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 03:28:00.117265 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 03:28:00.117275 kernel: audit: initializing netlink subsys (disabled) Apr 30 03:28:00.117285 kernel: audit: type=2000 audit(1745983678.029:1): state=initialized audit_enabled=0 res=1 Apr 30 03:28:00.117292 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 03:28:00.117309 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 03:28:00.117325 kernel: cpuidle: using governor menu Apr 30 03:28:00.117333 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 03:28:00.117343 kernel: dca service started, version 1.12.1 Apr 30 03:28:00.117352 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Apr 30 03:28:00.117361 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 30 03:28:00.117372 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:28:00.117380 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:28:00.117391 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:28:00.117399 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:28:00.117412 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:28:00.117420 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:28:00.117428 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:28:00.117436 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:28:00.117444 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:28:00.117452 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:28:00.117460 kernel: ACPI: Interpreter enabled
Apr 30 03:28:00.117467 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:28:00.117475 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:28:00.117485 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:28:00.117493 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 30 03:28:00.117501 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 30 03:28:00.117509 kernel: iommu: Default domain type: Translated
Apr 30 03:28:00.117517 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:28:00.117524 kernel: efivars: Registered efivars operations
Apr 30 03:28:00.117532 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:28:00.117541 kernel: PCI: System does not support PCI
Apr 30 03:28:00.117551 kernel: vgaarb: loaded
Apr 30 03:28:00.117561 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 30 03:28:00.117572 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:28:00.117580 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:28:00.117591 kernel: pnp: PnP ACPI init
Apr 30 03:28:00.117600 kernel: pnp: PnP ACPI: found 3 devices
Apr 30 03:28:00.117608 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:28:00.117619 kernel: NET: Registered PF_INET protocol family
Apr 30 03:28:00.117627 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:28:00.117639 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 30 03:28:00.117649 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:28:00.117661 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 03:28:00.117669 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 30 03:28:00.117680 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 30 03:28:00.117689 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 30 03:28:00.117700 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 30 03:28:00.117708 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:28:00.117720 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:28:00.117728 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:28:00.117741 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 30 03:28:00.117750 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
Apr 30 03:28:00.117759 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 03:28:00.117769 kernel: Initialise system trusted keyrings
Apr 30 03:28:00.117777 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 30 03:28:00.117788 kernel: Key type asymmetric registered
Apr 30 03:28:00.117796 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:28:00.117807 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:28:00.117816 kernel: io scheduler mq-deadline registered
Apr 30 03:28:00.117829 kernel: io scheduler kyber registered
Apr 30 03:28:00.117837 kernel: io scheduler bfq registered
Apr 30 03:28:00.117847 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:28:00.117857 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:28:00.117864 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:28:00.117876 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 30 03:28:00.117883 kernel: i8042: PNP: No PS/2 controller found.
Apr 30 03:28:00.118014 kernel: rtc_cmos 00:02: registered as rtc0
Apr 30 03:28:00.118109 kernel: rtc_cmos 00:02: setting system clock to 2025-04-30T03:27:59 UTC (1745983679)
Apr 30 03:28:00.118204 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 30 03:28:00.118219 kernel: intel_pstate: CPU model not supported
Apr 30 03:28:00.118227 kernel: efifb: probing for efifb
Apr 30 03:28:00.118235 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 30 03:28:00.118247 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 30 03:28:00.118267 kernel: efifb: scrolling: redraw
Apr 30 03:28:00.118275 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 30 03:28:00.118287 kernel: Console: switching to colour frame buffer device 128x48
Apr 30 03:28:00.118307 kernel: fb0: EFI VGA frame buffer device
Apr 30 03:28:00.118322 kernel: pstore: Using crash dump compression: deflate
Apr 30 03:28:00.118333 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 30 03:28:00.118341 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:28:00.118349 kernel: Segment Routing with IPv6
Apr 30 03:28:00.118360 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:28:00.118381 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:28:00.118391 kernel: Key type dns_resolver registered
Apr 30 03:28:00.118407 kernel: IPI shorthand broadcast: enabled
Apr 30 03:28:00.118427 kernel: sched_clock: Marking stable (993003700, 56840200)->(1311838900, -261995000)
Apr 30 03:28:00.118441 kernel: registered taskstats version 1
Apr 30 03:28:00.118449 kernel: Loading compiled-in X.509 certificates
Apr 30 03:28:00.118465 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:28:00.118481 kernel: Key type .fscrypt registered
Apr 30 03:28:00.118495 kernel: Key type fscrypt-provisioning registered
Apr 30 03:28:00.118503 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:28:00.118516 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:28:00.118540 kernel: ima: No architecture policies found
Apr 30 03:28:00.118556 kernel: clk: Disabling unused clocks
Apr 30 03:28:00.118568 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:28:00.118576 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:28:00.118589 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:28:00.118605 kernel: Run /init as init process
Apr 30 03:28:00.118621 kernel: with arguments:
Apr 30 03:28:00.118630 kernel: /init
Apr 30 03:28:00.118639 kernel: with environment:
Apr 30 03:28:00.118657 kernel: HOME=/
Apr 30 03:28:00.118667 kernel: TERM=linux
Apr 30 03:28:00.118676 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:28:00.118700 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:28:00.118714 systemd[1]: Detected virtualization microsoft.
Apr 30 03:28:00.118725 systemd[1]: Detected architecture x86-64.
Apr 30 03:28:00.118747 systemd[1]: Running in initrd.
Apr 30 03:28:00.118763 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:28:00.118774 systemd[1]: Hostname set to .
Apr 30 03:28:00.118792 systemd[1]: Initializing machine ID from random generator.
Apr 30 03:28:00.118811 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:28:00.118825 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:28:00.118834 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:28:00.118850 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:28:00.118866 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:28:00.118877 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:28:00.118893 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:28:00.118913 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:28:00.118926 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:28:00.118934 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:28:00.118946 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:28:00.118961 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:28:00.118978 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:28:00.118995 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:28:00.119004 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:28:00.119015 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:28:00.119033 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:28:00.119047 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:28:00.119055 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:28:00.119070 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:28:00.119086 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:28:00.119098 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:28:00.119115 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:28:00.119128 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:28:00.119136 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:28:00.119155 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:28:00.119169 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:28:00.119177 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:28:00.119192 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:28:00.119228 systemd-journald[176]: Collecting audit messages is disabled.
Apr 30 03:28:00.119265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:00.119274 systemd-journald[176]: Journal started
Apr 30 03:28:00.119315 systemd-journald[176]: Runtime Journal (/run/log/journal/09abdb7141f345a9aa7fbacd22b3663a) is 8.0M, max 158.8M, 150.8M free.
Apr 30 03:28:00.141056 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:28:00.141477 systemd-modules-load[177]: Inserted module 'overlay'
Apr 30 03:28:00.141659 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:28:00.148718 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:28:00.155107 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:28:00.159530 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:00.182462 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:28:00.183514 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:28:00.192516 kernel: Bridge firewalling registered
Apr 30 03:28:00.188559 systemd-modules-load[177]: Inserted module 'br_netfilter'
Apr 30 03:28:00.194955 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:28:00.209452 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:28:00.216403 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:28:00.219843 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:00.223328 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:28:00.226723 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:28:00.242036 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:28:00.247862 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:28:00.260431 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:28:00.268972 dracut-cmdline[204]: dracut-dracut-053
Apr 30 03:28:00.268972 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:28:00.290356 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:28:00.295455 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:28:00.305573 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:28:00.344627 systemd-resolved[243]: Positive Trust Anchors:
Apr 30 03:28:00.344641 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:28:00.344698 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:28:00.369687 systemd-resolved[243]: Defaulting to hostname 'linux'.
Apr 30 03:28:00.372964 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:28:00.373969 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:28:00.399316 kernel: SCSI subsystem initialized
Apr 30 03:28:00.409312 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:28:00.423319 kernel: iscsi: registered transport (tcp)
Apr 30 03:28:00.444159 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:28:00.444215 kernel: QLogic iSCSI HBA Driver
Apr 30 03:28:00.479252 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:28:00.487457 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:28:00.516376 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:28:00.516452 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:28:00.520055 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:28:00.560321 kernel: raid6: avx512x4 gen() 18256 MB/s
Apr 30 03:28:00.579315 kernel: raid6: avx512x2 gen() 18232 MB/s
Apr 30 03:28:00.598312 kernel: raid6: avx512x1 gen() 18313 MB/s
Apr 30 03:28:00.616308 kernel: raid6: avx2x4 gen() 18243 MB/s
Apr 30 03:28:00.635312 kernel: raid6: avx2x2 gen() 18273 MB/s
Apr 30 03:28:00.655497 kernel: raid6: avx2x1 gen() 14103 MB/s
Apr 30 03:28:00.655539 kernel: raid6: using algorithm avx512x1 gen() 18313 MB/s
Apr 30 03:28:00.676085 kernel: raid6: .... xor() 26885 MB/s, rmw enabled
Apr 30 03:28:00.676115 kernel: raid6: using avx512x2 recovery algorithm
Apr 30 03:28:00.699320 kernel: xor: automatically using best checksumming function avx
Apr 30 03:28:00.845324 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:28:00.855123 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:28:00.865449 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:28:00.878601 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Apr 30 03:28:00.882983 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:28:00.904469 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:28:00.919431 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Apr 30 03:28:00.947847 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:28:00.968567 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:28:01.010731 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:28:01.027640 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 03:28:01.068917 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:28:01.075722 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:28:01.079292 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:28:01.088780 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:28:01.101243 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:28:01.098649 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:28:01.122585 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:28:01.125484 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:01.132336 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:28:01.138430 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:28:01.150867 kernel: hv_vmbus: Vmbus version:5.2
Apr 30 03:28:01.150898 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:28:01.138621 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:01.141503 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:01.159316 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:28:01.163718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:01.169546 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:28:01.193268 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 30 03:28:01.193362 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 30 03:28:01.203509 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:28:01.205248 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:01.220465 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:01.233489 kernel: PTP clock support registered
Apr 30 03:28:01.244326 kernel: hv_utils: Registering HyperV Utility Driver
Apr 30 03:28:01.244384 kernel: hv_vmbus: registering driver hv_utils
Apr 30 03:28:01.246370 kernel: hv_utils: Heartbeat IC version 3.0
Apr 30 03:28:01.248918 kernel: hv_utils: Shutdown IC version 3.2
Apr 30 03:28:01.251154 kernel: hv_utils: TimeSync IC version 4.0
Apr 30 03:28:01.771283 systemd-resolved[243]: Clock change detected. Flushing caches.
Apr 30 03:28:01.783724 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 30 03:28:01.782801 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:01.801697 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 30 03:28:01.801728 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 03:28:01.801742 kernel: hv_vmbus: registering driver hv_netvsc
Apr 30 03:28:01.801753 kernel: hv_vmbus: registering driver hv_storvsc
Apr 30 03:28:01.807100 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:28:01.820006 kernel: scsi host0: storvsc_host_t
Apr 30 03:28:01.820315 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 30 03:28:01.820365 kernel: scsi host1: storvsc_host_t
Apr 30 03:28:01.827690 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Apr 30 03:28:01.831606 kernel: hv_vmbus: registering driver hid_hyperv
Apr 30 03:28:01.838765 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 30 03:28:01.838810 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 30 03:28:01.853204 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:01.868193 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 30 03:28:01.869922 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 03:28:01.869951 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 30 03:28:01.883883 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 30 03:28:01.899778 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 30 03:28:01.900000 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 30 03:28:01.900171 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 30 03:28:01.900343 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 30 03:28:01.900500 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:28:01.900526 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 30 03:28:01.990830 kernel: hv_netvsc 6045bddf-7bf7-6045-bddf-7bf76045bddf eth0: VF slot 1 added
Apr 30 03:28:01.999951 kernel: hv_vmbus: registering driver hv_pci
Apr 30 03:28:02.005159 kernel: hv_pci 12ddb82a-d48e-4535-895a-0e7528f09d6a: PCI VMBus probing: Using version 0x10004
Apr 30 03:28:02.051411 kernel: hv_pci 12ddb82a-d48e-4535-895a-0e7528f09d6a: PCI host bridge to bus d48e:00
Apr 30 03:28:02.051838 kernel: pci_bus d48e:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Apr 30 03:28:02.052027 kernel: pci_bus d48e:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 30 03:28:02.052180 kernel: pci d48e:00:02.0: [15b3:1016] type 00 class 0x020000
Apr 30 03:28:02.052371 kernel: pci d48e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 30 03:28:02.052546 kernel: pci d48e:00:02.0: enabling Extended Tags
Apr 30 03:28:02.052742 kernel: pci d48e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at d48e:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Apr 30 03:28:02.052924 kernel: pci_bus d48e:00: busn_res: [bus 00-ff] end is updated to 00
Apr 30 03:28:02.053073 kernel: pci d48e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 30 03:28:02.217850 kernel: mlx5_core d48e:00:02.0: enabling device (0000 -> 0002)
Apr 30 03:28:02.450243 kernel: mlx5_core d48e:00:02.0: firmware version: 14.30.5000
Apr 30 03:28:02.450470 kernel: hv_netvsc 6045bddf-7bf7-6045-bddf-7bf76045bddf eth0: VF registering: eth1
Apr 30 03:28:02.450654 kernel: mlx5_core d48e:00:02.0 eth1: joined to eth0
Apr 30 03:28:02.450837 kernel: mlx5_core d48e:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Apr 30 03:28:02.352620 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 30 03:28:02.458611 kernel: mlx5_core d48e:00:02.0 enP54414s1: renamed from eth1
Apr 30 03:28:02.469674 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (454)
Apr 30 03:28:02.487441 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 30 03:28:02.490913 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 30 03:28:02.501765 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 03:28:03.371618 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (460)
Apr 30 03:28:03.385852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 30 03:28:03.489437 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 30 03:28:03.514040 disk-uuid[592]: Warning: The kernel is still using the old partition table.
Apr 30 03:28:03.514040 disk-uuid[592]: The new table will be used at the next reboot or after you
Apr 30 03:28:03.514040 disk-uuid[592]: run partprobe(8) or kpartx(8)
Apr 30 03:28:03.514040 disk-uuid[592]: The operation has completed successfully.
Apr 30 03:28:03.691478 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 03:28:03.691587 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 03:28:03.701745 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 03:28:03.707130 sh[688]: Success
Apr 30 03:28:03.733660 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 30 03:28:03.924807 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 03:28:03.937703 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 03:28:03.942711 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 03:28:03.959406 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26
Apr 30 03:28:03.959465 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:03.963078 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 03:28:03.965985 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 03:28:03.968686 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 03:28:04.212106 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 03:28:04.217157 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 03:28:04.229838 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 03:28:04.234733 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 03:28:04.260531 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:04.260578 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:04.260618 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:28:04.278616 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:28:04.291608 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:04.291885 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 03:28:04.301738 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 03:28:04.311800 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 03:28:04.326949 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:28:04.338802 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:28:04.359536 systemd-networkd[872]: lo: Link UP
Apr 30 03:28:04.359545 systemd-networkd[872]: lo: Gained carrier
Apr 30 03:28:04.361641 systemd-networkd[872]: Enumeration completed
Apr 30 03:28:04.361862 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:28:04.365011 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:28:04.365014 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:28:04.366104 systemd[1]: Reached target network.target - Network.
Apr 30 03:28:04.429610 kernel: mlx5_core d48e:00:02.0 enP54414s1: Link up
Apr 30 03:28:04.466096 kernel: hv_netvsc 6045bddf-7bf7-6045-bddf-7bf76045bddf eth0: Data path switched to VF: enP54414s1
Apr 30 03:28:04.465677 systemd-networkd[872]: enP54414s1: Link UP
Apr 30 03:28:04.465798 systemd-networkd[872]: eth0: Link UP
Apr 30 03:28:04.465956 systemd-networkd[872]: eth0: Gained carrier
Apr 30 03:28:04.465968 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:28:04.471492 systemd-networkd[872]: enP54414s1: Gained carrier
Apr 30 03:28:04.494647 systemd-networkd[872]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16
Apr 30 03:28:05.162893 ignition[851]: Ignition 2.19.0
Apr 30 03:28:05.162904 ignition[851]: Stage: fetch-offline
Apr 30 03:28:05.164507 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:28:05.162955 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:05.162966 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:05.163098 ignition[851]: parsed url from cmdline: ""
Apr 30 03:28:05.163104 ignition[851]: no config URL provided
Apr 30 03:28:05.163112 ignition[851]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:28:05.163122 ignition[851]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:28:05.163128 ignition[851]: failed to fetch config: resource requires networking
Apr 30 03:28:05.163397 ignition[851]: Ignition finished successfully
Apr 30 03:28:05.194767 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 03:28:05.211437 ignition[881]: Ignition 2.19.0
Apr 30 03:28:05.211448 ignition[881]: Stage: fetch
Apr 30 03:28:05.211691 ignition[881]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:05.211711 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:05.211795 ignition[881]: parsed url from cmdline: ""
Apr 30 03:28:05.211798 ignition[881]: no config URL provided
Apr 30 03:28:05.211804 ignition[881]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:28:05.211813 ignition[881]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:28:05.211834 ignition[881]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 30 03:28:05.299456 ignition[881]: GET result: OK
Apr 30 03:28:05.299648 ignition[881]: config has been read from IMDS userdata
Apr 30 03:28:05.299678 ignition[881]: parsing config with SHA512: c6294e75d68f69b36ecb8ca3068eef377fef38b4160d3e77dc0626971cd78fdc43d372358f869c80927f4b0279cb8e265982822275d8c2beac7554a2647d6f7e
Apr 30 03:28:05.306951 unknown[881]: fetched base config from "system"
Apr 30 03:28:05.306983 unknown[881]: fetched base config from "system"
Apr 30 03:28:05.309004 ignition[881]: fetch: fetch complete
Apr 30 03:28:05.306993 unknown[881]: fetched user config from "azure"
Apr 30 03:28:05.309013 ignition[881]: fetch: fetch passed
Apr 30 03:28:05.311107 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:28:05.309086 ignition[881]: Ignition finished successfully
Apr 30 03:28:05.321701 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:28:05.338420 ignition[888]: Ignition 2.19.0
Apr 30 03:28:05.338430 ignition[888]: Stage: kargs
Apr 30 03:28:05.338661 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:05.338675 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:05.339871 ignition[888]: kargs: kargs passed
Apr 30 03:28:05.339911 ignition[888]: Ignition finished successfully
Apr 30 03:28:05.350000 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:28:05.358761 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:28:05.372647 ignition[894]: Ignition 2.19.0
Apr 30 03:28:05.372657 ignition[894]: Stage: disks
Apr 30 03:28:05.372866 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:05.372876 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:05.373708 ignition[894]: disks: disks passed
Apr 30 03:28:05.373752 ignition[894]: Ignition finished successfully
Apr 30 03:28:05.384064 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:28:05.386631 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:28:05.391617 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:28:05.394572 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:28:05.400298 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:28:05.405771 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:28:05.415774 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:28:05.473374 systemd-fsck[902]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 30 03:28:05.479238 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:28:05.492101 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:28:05.579623 kernel: EXT4-fs (sda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:28:05.580190 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:28:05.582905 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:28:05.620748 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:28:05.625719 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:28:05.637047 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (913)
Apr 30 03:28:05.645783 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:05.645835 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:05.646043 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 03:28:05.651650 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:28:05.657904 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:28:05.657312 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:28:05.657355 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:28:05.669824 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:28:05.674391 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:28:05.692760 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:28:05.748809 systemd-networkd[872]: enP54414s1: Gained IPv6LL
Apr 30 03:28:05.749184 systemd-networkd[872]: eth0: Gained IPv6LL
Apr 30 03:28:06.150030 coreos-metadata[915]: Apr 30 03:28:06.149 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 30 03:28:06.156338 coreos-metadata[915]: Apr 30 03:28:06.156 INFO Fetch successful
Apr 30 03:28:06.156338 coreos-metadata[915]: Apr 30 03:28:06.156 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 30 03:28:06.178685 coreos-metadata[915]: Apr 30 03:28:06.178 INFO Fetch successful
Apr 30 03:28:06.199379 initrd-setup-root[941]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:28:06.203581 coreos-metadata[915]: Apr 30 03:28:06.202 INFO wrote hostname ci-4081.3.3-a-e2728433b6 to /sysroot/etc/hostname
Apr 30 03:28:06.203455 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:28:06.236383 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:28:06.245040 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:28:06.249395 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:28:07.024980 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:28:07.035711 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:28:07.042694 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:28:07.054219 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:07.054466 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 03:28:07.086188 ignition[1035]: INFO : Ignition 2.19.0
Apr 30 03:28:07.088820 ignition[1035]: INFO : Stage: mount
Apr 30 03:28:07.088820 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:07.088820 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:07.088820 ignition[1035]: INFO : mount: mount passed
Apr 30 03:28:07.088820 ignition[1035]: INFO : Ignition finished successfully
Apr 30 03:28:07.090701 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 03:28:07.096355 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 03:28:07.114698 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 03:28:07.121288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:28:07.135611 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1047)
Apr 30 03:28:07.139610 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:07.139643 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:07.144551 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:28:07.149610 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:28:07.151309 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:28:07.172223 ignition[1063]: INFO : Ignition 2.19.0
Apr 30 03:28:07.172223 ignition[1063]: INFO : Stage: files
Apr 30 03:28:07.178326 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:07.178326 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:07.178326 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 03:28:07.200251 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 03:28:07.200251 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 03:28:07.282177 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 03:28:07.286749 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 03:28:07.286749 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 03:28:07.282681 unknown[1063]: wrote ssh authorized keys file for user: core
Apr 30 03:28:07.297579 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:28:07.305304 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 03:28:07.443314 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 03:28:11.069631 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:28:11.075043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Apr 30 03:28:11.614860 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 30 03:28:11.943556 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:28:11.943556 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 30 03:28:11.975892 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:28:11.984280 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:28:11.984280 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 30 03:28:11.984280 ignition[1063]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 03:28:11.984280 ignition[1063]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 03:28:11.984280 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:28:11.984280 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:28:11.984280 ignition[1063]: INFO : files: files passed
Apr 30 03:28:11.984280 ignition[1063]: INFO : Ignition finished successfully
Apr 30 03:28:11.978617 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:28:12.008285 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:28:12.020900 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:28:12.024155 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:28:12.024283 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:28:12.038504 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:28:12.038504 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:28:12.051416 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:28:12.040046 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:28:12.047947 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:28:12.063655 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:28:12.094573 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:28:12.094703 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:28:12.100479 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:28:12.106241 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:28:12.113326 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:28:12.125745 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:28:12.139345 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:28:12.149749 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:28:12.159500 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:28:12.163404 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:28:12.172194 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:28:12.178284 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:28:12.178452 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:28:12.188222 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:28:12.193437 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:28:12.194555 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:28:12.195501 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:28:12.195943 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:28:12.196356 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:28:12.196813 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:28:12.197240 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:28:12.197654 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:28:12.198058 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:28:12.198425 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:28:12.198545 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:28:12.199535 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:28:12.199972 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:28:12.200351 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:28:12.213959 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:28:12.264250 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:28:12.264416 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:28:12.277995 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:28:12.278198 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:28:12.284309 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:28:12.284459 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:28:12.292088 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 03:28:12.292229 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:28:12.303941 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:28:12.310050 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:28:12.315359 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:28:12.315692 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:28:12.324010 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:28:12.324121 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:28:12.334639 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:28:12.334747 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:28:12.345265 ignition[1116]: INFO : Ignition 2.19.0
Apr 30 03:28:12.345265 ignition[1116]: INFO : Stage: umount
Apr 30 03:28:12.345265 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:12.345265 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:12.352155 ignition[1116]: INFO : umount: umount passed
Apr 30 03:28:12.352155 ignition[1116]: INFO : Ignition finished successfully
Apr 30 03:28:12.348179 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:28:12.348315 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:28:12.350524 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:28:12.350579 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:28:12.351344 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:28:12.351383 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:28:12.355184 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 03:28:12.355225 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 03:28:12.355581 systemd[1]: Stopped target network.target - Network.
Apr 30 03:28:12.355958 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:28:12.355997 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:28:12.356392 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:28:12.362447 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:28:12.396221 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:28:12.401746 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:28:12.407758 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:28:12.410201 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:28:12.410252 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:28:12.415016 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:28:12.415066 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:28:12.417849 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:28:12.417918 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:28:12.426172 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:28:12.428279 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:28:12.435907 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:28:12.443144 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:28:12.444453 systemd-networkd[872]: eth0: DHCPv6 lease lost
Apr 30 03:28:12.448073 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:28:12.448632 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:28:12.448738 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:28:12.459936 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:28:12.460029 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:28:12.465379 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:28:12.465455 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:28:12.469410 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:28:12.469469 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:28:12.488687 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:28:12.496087 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:28:12.496154 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:28:12.501748 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:28:12.507099 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:28:12.507195 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:28:12.523821 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:28:12.523944 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:28:12.531069 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:28:12.531132 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:28:12.531932 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:28:12.531976 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:28:12.550148 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:28:12.550302 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:28:12.556263 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:28:12.556343 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:28:12.561469 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:28:12.561511 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:28:12.566701 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:28:12.566753 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:28:12.572030 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:28:12.572076 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:28:12.595764 kernel: hv_netvsc 6045bddf-7bf7-6045-bddf-7bf76045bddf eth0: Data path switched from VF: enP54414s1
Apr 30 03:28:12.576986 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:28:12.577037 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:12.601841 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:28:12.607227 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:28:12.607293 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:28:12.613087 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 03:28:12.616163 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:28:12.625313 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:28:12.625370 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:28:12.630905 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:28:12.630957 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:12.634277 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:28:12.634356 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:28:12.642343 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:28:12.642446 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:28:12.647750 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:28:12.663788 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:28:12.673178 systemd[1]: Switching root.
Apr 30 03:28:12.732791 systemd-journald[176]: Journal stopped
Apr 30 03:28:17.608618 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
Apr 30 03:28:17.608657 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 03:28:17.608669 kernel: SELinux: policy capability open_perms=1
Apr 30 03:28:17.608677 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 03:28:17.608686 kernel: SELinux: policy capability always_check_network=0
Apr 30 03:28:17.608694 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 03:28:17.608703 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 03:28:17.608713 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 03:28:17.608722 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 03:28:17.608730 kernel: audit: type=1403 audit(1745983694.537:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 03:28:17.608739 systemd[1]: Successfully loaded SELinux policy in 138.728ms.
Apr 30 03:28:17.608749 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.843ms.
Apr 30 03:28:17.608762 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:28:17.608772 systemd[1]: Detected virtualization microsoft.
Apr 30 03:28:17.608785 systemd[1]: Detected architecture x86-64.
Apr 30 03:28:17.608794 systemd[1]: Detected first boot.
Apr 30 03:28:17.608804 systemd[1]: Hostname set to .
Apr 30 03:28:17.608813 systemd[1]: Initializing machine ID from random generator.
Apr 30 03:28:17.608823 zram_generator::config[1159]: No configuration found.
Apr 30 03:28:17.608836 systemd[1]: Populated /etc with preset unit settings.
Apr 30 03:28:17.608845 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 03:28:17.608855 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 03:28:17.608864 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:28:17.608878 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 03:28:17.608888 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 03:28:17.608899 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 03:28:17.608914 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 03:28:17.608924 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 03:28:17.608936 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 03:28:17.608947 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 03:28:17.608958 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 03:28:17.608970 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:28:17.608982 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:28:17.608994 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 03:28:17.609008 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 03:28:17.609019 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 03:28:17.609032 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:28:17.609042 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 03:28:17.609054 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:28:17.609064 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 03:28:17.609080 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 03:28:17.609091 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:28:17.609106 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 03:28:17.609118 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:28:17.609129 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:28:17.609141 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:28:17.609152 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:28:17.609165 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 03:28:17.609176 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 03:28:17.609190 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:28:17.609203 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:28:17.609214 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:28:17.609227 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 03:28:17.609238 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 03:28:17.609251 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 03:28:17.609261 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 03:28:17.609274 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:17.609286 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 03:28:17.609297 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 03:28:17.609310 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 03:28:17.609322 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 03:28:17.609334 systemd[1]: Reached target machines.target - Containers.
Apr 30 03:28:17.609349 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 03:28:17.609360 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:28:17.609373 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:28:17.609383 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 03:28:17.609396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:28:17.609407 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:28:17.609419 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:28:17.609430 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 03:28:17.609443 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:28:17.609456 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 03:28:17.609469 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 03:28:17.609480 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 03:28:17.609494 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 03:28:17.609505 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 03:28:17.609517 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:28:17.609528 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:28:17.609540 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 03:28:17.609556 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 03:28:17.609600 systemd-journald[1265]: Collecting audit messages is disabled.
Apr 30 03:28:17.609625 kernel: loop: module loaded
Apr 30 03:28:17.609638 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:28:17.609651 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 03:28:17.609665 systemd[1]: Stopped verity-setup.service.
Apr 30 03:28:17.609676 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:17.609690 kernel: ACPI: bus type drm_connector registered
Apr 30 03:28:17.609699 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 03:28:17.609711 systemd-journald[1265]: Journal started
Apr 30 03:28:17.609735 systemd-journald[1265]: Runtime Journal (/run/log/journal/b451709ea71c4b8d8909c37d0fce5f42) is 8.0M, max 158.8M, 150.8M free.
Apr 30 03:28:16.854407 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 03:28:17.000518 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 03:28:17.000899 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 03:28:17.624618 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:28:17.629044 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 03:28:17.632128 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 03:28:17.634937 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 03:28:17.640335 kernel: fuse: init (API version 7.39)
Apr 30 03:28:17.640813 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 03:28:17.643946 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 03:28:17.646958 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 03:28:17.650438 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:17.654435 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 03:28:17.654850 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 03:28:17.658697 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:17.658959 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:17.662693 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:28:17.662964 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:28:17.666247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:17.666535 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:17.671651 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 03:28:17.671949 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 03:28:17.675148 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:17.675443 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:17.678974 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:17.682198 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 03:28:17.694292 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 03:28:17.708081 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:17.714241 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Apr 30 03:28:17.721686 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 03:28:17.725957 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 03:28:17.728814 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 03:28:17.728940 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:28:17.732876 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 03:28:17.740774 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 03:28:17.744693 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 03:28:17.747232 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:17.767754 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 03:28:17.771969 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 03:28:17.775138 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:28:17.779965 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 03:28:17.784457 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:28:17.785387 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:28:17.792714 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 03:28:17.797351 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 30 03:28:17.805774 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 03:28:17.817011 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 03:28:17.822749 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 03:28:17.829234 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 03:28:17.832864 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 03:28:17.844986 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 03:28:17.849576 kernel: loop0: detected capacity change from 0 to 140768 Apr 30 03:28:17.852794 systemd-journald[1265]: Time spent on flushing to /var/log/journal/b451709ea71c4b8d8909c37d0fce5f42 is 28.104ms for 965 entries. Apr 30 03:28:17.852794 systemd-journald[1265]: System Journal (/var/log/journal/b451709ea71c4b8d8909c37d0fce5f42) is 8.0M, max 2.6G, 2.6G free. Apr 30 03:28:17.894901 systemd-journald[1265]: Received client request to flush runtime journal. Apr 30 03:28:17.857950 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 03:28:17.861453 udevadm[1298]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 30 03:28:17.893457 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:17.897069 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 03:28:17.914783 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 03:28:17.915389 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 03:28:17.922758 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Apr 30 03:28:17.922785 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. 
Apr 30 03:28:17.929503 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:28:17.940809 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 03:28:18.148267 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 03:28:18.163415 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:28:18.186291 systemd-tmpfiles[1317]: ACLs are not supported, ignoring. Apr 30 03:28:18.186319 systemd-tmpfiles[1317]: ACLs are not supported, ignoring. Apr 30 03:28:18.191658 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:28:18.199618 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 03:28:18.234614 kernel: loop1: detected capacity change from 0 to 142488 Apr 30 03:28:18.599619 kernel: loop2: detected capacity change from 0 to 210664 Apr 30 03:28:18.646615 kernel: loop3: detected capacity change from 0 to 31056 Apr 30 03:28:18.988330 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 03:28:18.995759 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:19.002617 kernel: loop4: detected capacity change from 0 to 140768 Apr 30 03:28:19.017619 kernel: loop5: detected capacity change from 0 to 142488 Apr 30 03:28:19.034574 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Apr 30 03:28:19.036718 kernel: loop6: detected capacity change from 0 to 210664 Apr 30 03:28:19.043616 kernel: loop7: detected capacity change from 0 to 31056 Apr 30 03:28:19.046774 (sd-merge)[1325]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Apr 30 03:28:19.047284 (sd-merge)[1325]: Merged extensions into '/usr'. Apr 30 03:28:19.050542 systemd[1]: Reloading requested from client PID 1296 ('systemd-sysext') (unit systemd-sysext.service)... 
Apr 30 03:28:19.050559 systemd[1]: Reloading... Apr 30 03:28:19.113617 zram_generator::config[1351]: No configuration found. Apr 30 03:28:19.340707 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:19.354844 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:28:19.461619 kernel: hv_vmbus: registering driver hyperv_fb Apr 30 03:28:19.475806 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 03:28:19.476115 systemd[1]: Reloading finished in 425 ms. Apr 30 03:28:19.483340 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Apr 30 03:28:19.488698 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Apr 30 03:28:19.494088 kernel: hv_vmbus: registering driver hv_balloon Apr 30 03:28:19.494146 kernel: Console: switching to colour dummy device 80x25 Apr 30 03:28:19.498644 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Apr 30 03:28:19.517609 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 03:28:19.529799 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:19.537683 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 03:28:19.677920 systemd[1]: Starting ensure-sysext.service... Apr 30 03:28:19.692490 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:28:19.698140 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:28:19.718708 systemd[1]: Reloading requested from client PID 1448 ('systemctl') (unit ensure-sysext.service)... Apr 30 03:28:19.718727 systemd[1]: Reloading... 
Apr 30 03:28:19.755611 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1416) Apr 30 03:28:19.834617 zram_generator::config[1486]: No configuration found. Apr 30 03:28:19.867194 systemd-tmpfiles[1454]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 03:28:19.867580 systemd-tmpfiles[1454]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 03:28:19.868616 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Apr 30 03:28:19.873731 systemd-tmpfiles[1454]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 03:28:19.874452 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. Apr 30 03:28:19.874545 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. Apr 30 03:28:19.907357 systemd-tmpfiles[1454]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:28:19.907375 systemd-tmpfiles[1454]: Skipping /boot Apr 30 03:28:19.939313 systemd-tmpfiles[1454]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:28:19.939328 systemd-tmpfiles[1454]: Skipping /boot Apr 30 03:28:20.083403 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:20.161475 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 30 03:28:20.165220 systemd[1]: Reloading finished in 446 ms. Apr 30 03:28:20.195276 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:20.237211 systemd[1]: Finished ensure-sysext.service. Apr 30 03:28:20.241907 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 30 03:28:20.249796 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:28:20.273402 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 03:28:20.277788 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:20.279443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:20.292783 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:28:20.297825 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:28:20.303187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:20.309091 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:20.316823 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:28:20.322775 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 03:28:20.335887 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:28:20.340396 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:28:20.349762 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:28:20.355023 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 03:28:20.361055 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:20.362115 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:20.364657 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Apr 30 03:28:20.368235 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:20.368424 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:20.371626 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:28:20.371819 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:28:20.373584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:20.373717 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:20.374404 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:20.374520 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:20.386951 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 03:28:20.394279 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 03:28:20.400261 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:28:20.400337 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:28:20.401693 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 03:28:20.438867 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 03:28:20.490192 lvm[1604]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:28:20.504741 augenrules[1611]: No rules Apr 30 03:28:20.506072 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:28:20.539451 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:28:20.544049 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Apr 30 03:28:20.552892 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:28:20.556260 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:28:20.559730 lvm[1624]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:28:20.587715 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:28:20.589967 systemd-resolved[1592]: Positive Trust Anchors: Apr 30 03:28:20.590223 systemd-resolved[1592]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:28:20.590308 systemd-resolved[1592]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:28:20.600701 systemd-networkd[1453]: lo: Link UP Apr 30 03:28:20.600711 systemd-networkd[1453]: lo: Gained carrier Apr 30 03:28:20.603088 systemd-networkd[1453]: Enumeration completed Apr 30 03:28:20.603271 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:28:20.603484 systemd-networkd[1453]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:20.603488 systemd-networkd[1453]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:28:20.608311 systemd-resolved[1592]: Using system hostname 'ci-4081.3.3-a-e2728433b6'. 
Apr 30 03:28:20.610752 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 03:28:20.659611 kernel: mlx5_core d48e:00:02.0 enP54414s1: Link up Apr 30 03:28:20.681681 kernel: hv_netvsc 6045bddf-7bf7-6045-bddf-7bf76045bddf eth0: Data path switched to VF: enP54414s1 Apr 30 03:28:20.683233 systemd-networkd[1453]: enP54414s1: Link UP Apr 30 03:28:20.683382 systemd-networkd[1453]: eth0: Link UP Apr 30 03:28:20.683388 systemd-networkd[1453]: eth0: Gained carrier Apr 30 03:28:20.683411 systemd-networkd[1453]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:20.685445 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:28:20.687284 systemd[1]: Reached target network.target - Network. Apr 30 03:28:20.687511 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:20.697789 systemd-networkd[1453]: enP54414s1: Gained carrier Apr 30 03:28:20.722729 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:20.736663 systemd-networkd[1453]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Apr 30 03:28:21.295860 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:28:21.304685 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:28:22.452729 systemd-networkd[1453]: eth0: Gained IPv6LL Apr 30 03:28:22.455612 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:28:22.459911 systemd[1]: Reached target network-online.target - Network is Online. 
Apr 30 03:28:22.556781 ldconfig[1291]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 03:28:22.567064 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 03:28:22.577915 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:28:22.590581 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:28:22.593771 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:28:22.596503 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:28:22.600488 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:28:22.604343 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:28:22.607718 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:28:22.610877 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:28:22.614088 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:28:22.614134 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:28:22.616563 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:28:22.619545 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:28:22.623683 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:28:22.630366 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:28:22.634016 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:28:22.636759 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:28:22.639094 systemd[1]: Reached target basic.target - Basic System. 
Apr 30 03:28:22.641585 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:28:22.641650 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:28:22.644780 systemd-networkd[1453]: enP54414s1: Gained IPv6LL Apr 30 03:28:22.647791 systemd[1]: Starting chronyd.service - NTP client/server... Apr 30 03:28:22.651758 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:28:22.658839 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 03:28:22.663787 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:28:22.673785 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:28:22.678760 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:28:22.683895 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:28:22.683952 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Apr 30 03:28:22.686764 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Apr 30 03:28:22.689803 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Apr 30 03:28:22.690990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:22.707805 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:28:22.712767 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:28:22.724092 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Apr 30 03:28:22.726693 jq[1643]: false Apr 30 03:28:22.736740 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:28:22.741126 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:28:22.746453 KVP[1647]: KVP starting; pid is:1647 Apr 30 03:28:22.746998 (chronyd)[1639]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Apr 30 03:28:22.754227 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:28:22.759701 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:28:22.760217 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:28:22.761787 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:28:22.766696 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:28:22.777955 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:28:22.778182 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:28:22.781777 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:28:22.782005 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 30 03:28:22.789325 KVP[1647]: KVP LIC Version: 3.1 Apr 30 03:28:22.789679 kernel: hv_utils: KVP IC version 4.0 Apr 30 03:28:22.800255 chronyd[1675]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Apr 30 03:28:22.806434 (ntainerd)[1674]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:28:22.820282 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:28:22.820578 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:28:22.823367 chronyd[1675]: Timezone right/UTC failed leap second check, ignoring Apr 30 03:28:22.825369 chronyd[1675]: Loaded seccomp filter (level 2) Apr 30 03:28:22.826618 extend-filesystems[1646]: Found loop4 Apr 30 03:28:22.826618 extend-filesystems[1646]: Found loop5 Apr 30 03:28:22.826618 extend-filesystems[1646]: Found loop6 Apr 30 03:28:22.826618 extend-filesystems[1646]: Found loop7 Apr 30 03:28:22.826618 extend-filesystems[1646]: Found sda Apr 30 03:28:22.826618 extend-filesystems[1646]: Found sda1 Apr 30 03:28:22.826618 extend-filesystems[1646]: Found sda2 Apr 30 03:28:22.826618 extend-filesystems[1646]: Found sda3 Apr 30 03:28:22.826618 extend-filesystems[1646]: Found usr Apr 30 03:28:22.826618 extend-filesystems[1646]: Found sda4 Apr 30 03:28:22.826618 extend-filesystems[1646]: Found sda6 Apr 30 03:28:22.826618 extend-filesystems[1646]: Found sda7 Apr 30 03:28:22.826618 extend-filesystems[1646]: Found sda9 Apr 30 03:28:22.826618 extend-filesystems[1646]: Checking size of /dev/sda9 Apr 30 03:28:22.950455 extend-filesystems[1646]: Old size kept for /dev/sda9 Apr 30 03:28:22.950455 extend-filesystems[1646]: Found sr0 Apr 30 03:28:22.848164 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Apr 30 03:28:22.956956 tar[1668]: linux-amd64/helm Apr 30 03:28:22.837287 dbus-daemon[1642]: [system] SELinux support is enabled Apr 30 03:28:22.856727 systemd[1]: Started chronyd.service - NTP client/server. Apr 30 03:28:22.957730 jq[1660]: true Apr 30 03:28:22.870079 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:28:22.870155 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:28:22.958225 jq[1685]: true Apr 30 03:28:22.881773 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:28:22.881799 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:28:22.926058 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:28:22.926302 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:28:22.943763 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:28:22.977760 update_engine[1659]: I20250430 03:28:22.975914 1659 main.cc:92] Flatcar Update Engine starting Apr 30 03:28:22.986127 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:28:22.994610 update_engine[1659]: I20250430 03:28:22.992792 1659 update_check_scheduler.cc:74] Next update check in 10m36s Apr 30 03:28:23.000872 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:28:23.021955 systemd-logind[1656]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:28:23.022755 systemd-logind[1656]: New seat seat0. Apr 30 03:28:23.026274 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 30 03:28:23.076647 coreos-metadata[1641]: Apr 30 03:28:23.076 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 30 03:28:23.081059 coreos-metadata[1641]: Apr 30 03:28:23.080 INFO Fetch successful Apr 30 03:28:23.084613 coreos-metadata[1641]: Apr 30 03:28:23.083 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Apr 30 03:28:23.089756 bash[1716]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:28:23.092663 coreos-metadata[1641]: Apr 30 03:28:23.092 INFO Fetch successful Apr 30 03:28:23.093639 coreos-metadata[1641]: Apr 30 03:28:23.093 INFO Fetching http://168.63.129.16/machine/329f318b-aecf-4453-8890-7b38b5699679/f3d308cb%2D5543%2D4fa8%2Da98e%2D362386a06a9e.%5Fci%2D4081.3.3%2Da%2De2728433b6?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Apr 30 03:28:23.095472 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:28:23.100571 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 30 03:28:23.101607 coreos-metadata[1641]: Apr 30 03:28:23.100 INFO Fetch successful Apr 30 03:28:23.101607 coreos-metadata[1641]: Apr 30 03:28:23.101 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Apr 30 03:28:23.114828 coreos-metadata[1641]: Apr 30 03:28:23.113 INFO Fetch successful Apr 30 03:28:23.207385 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:28:23.213675 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 30 03:28:23.247614 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1718) Apr 30 03:28:23.442626 locksmithd[1715]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:28:23.451646 sshd_keygen[1684]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:28:23.507995 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:28:23.520902 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:28:23.532778 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Apr 30 03:28:23.556146 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:28:23.556509 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:28:23.570692 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:28:23.587759 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Apr 30 03:28:23.624122 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:28:23.637967 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:28:23.650130 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:28:23.653426 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:28:23.934464 tar[1668]: linux-amd64/LICENSE Apr 30 03:28:23.934706 tar[1668]: linux-amd64/README.md Apr 30 03:28:23.950461 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:28:23.979442 containerd[1674]: time="2025-04-30T03:28:23.979258000Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:28:24.016626 containerd[1674]: time="2025-04-30T03:28:24.015857900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Apr 30 03:28:24.017757 containerd[1674]: time="2025-04-30T03:28:24.017624300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:24.017757 containerd[1674]: time="2025-04-30T03:28:24.017666800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:28:24.017757 containerd[1674]: time="2025-04-30T03:28:24.017688600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:28:24.017928 containerd[1674]: time="2025-04-30T03:28:24.017848800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:28:24.017928 containerd[1674]: time="2025-04-30T03:28:24.017870500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:24.017995 containerd[1674]: time="2025-04-30T03:28:24.017945100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:24.017995 containerd[1674]: time="2025-04-30T03:28:24.017962800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:24.018191 containerd[1674]: time="2025-04-30T03:28:24.018160700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:24.018191 containerd[1674]: time="2025-04-30T03:28:24.018184100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:24.018272 containerd[1674]: time="2025-04-30T03:28:24.018201800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:24.018272 containerd[1674]: time="2025-04-30T03:28:24.018215100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:24.018355 containerd[1674]: time="2025-04-30T03:28:24.018318000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:24.019043 containerd[1674]: time="2025-04-30T03:28:24.018534200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:24.019043 containerd[1674]: time="2025-04-30T03:28:24.018734300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:24.019043 containerd[1674]: time="2025-04-30T03:28:24.018757100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:28:24.019043 containerd[1674]: time="2025-04-30T03:28:24.018861200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 30 03:28:24.019043 containerd[1674]: time="2025-04-30T03:28:24.018916800Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:28:24.029518 containerd[1674]: time="2025-04-30T03:28:24.028581000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:28:24.029518 containerd[1674]: time="2025-04-30T03:28:24.028650800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:28:24.029518 containerd[1674]: time="2025-04-30T03:28:24.028681500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:28:24.029518 containerd[1674]: time="2025-04-30T03:28:24.028704200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:28:24.029518 containerd[1674]: time="2025-04-30T03:28:24.028723700Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:28:24.029518 containerd[1674]: time="2025-04-30T03:28:24.028863100Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:28:24.029518 containerd[1674]: time="2025-04-30T03:28:24.029168300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:28:24.029518 containerd[1674]: time="2025-04-30T03:28:24.029313700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:28:24.029518 containerd[1674]: time="2025-04-30T03:28:24.029338500Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:28:24.029518 containerd[1674]: time="2025-04-30T03:28:24.029356500Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 30 03:28:24.029518 containerd[1674]: time="2025-04-30T03:28:24.029374700Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:28:24.029518 containerd[1674]: time="2025-04-30T03:28:24.029393100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:28:24.029518 containerd[1674]: time="2025-04-30T03:28:24.029411700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:28:24.029518 containerd[1674]: time="2025-04-30T03:28:24.029431000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:28:24.030066 containerd[1674]: time="2025-04-30T03:28:24.029452700Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:28:24.030066 containerd[1674]: time="2025-04-30T03:28:24.029469300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:28:24.030066 containerd[1674]: time="2025-04-30T03:28:24.029486500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:28:24.030066 containerd[1674]: time="2025-04-30T03:28:24.029526100Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:28:24.030066 containerd[1674]: time="2025-04-30T03:28:24.029554300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:28:24.030066 containerd[1674]: time="2025-04-30T03:28:24.029571700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 30 03:28:24.030066 containerd[1674]: time="2025-04-30T03:28:24.029587600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:28:24.030066 containerd[1674]: time="2025-04-30T03:28:24.029617400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:28:24.030066 containerd[1674]: time="2025-04-30T03:28:24.029634200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:28:24.030066 containerd[1674]: time="2025-04-30T03:28:24.029652100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:28:24.030066 containerd[1674]: time="2025-04-30T03:28:24.029669600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:28:24.030066 containerd[1674]: time="2025-04-30T03:28:24.029689000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:28:24.030066 containerd[1674]: time="2025-04-30T03:28:24.029708300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:28:24.030066 containerd[1674]: time="2025-04-30T03:28:24.029729800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:28:24.030527 containerd[1674]: time="2025-04-30T03:28:24.029746500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:28:24.030527 containerd[1674]: time="2025-04-30T03:28:24.029809200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:28:24.030527 containerd[1674]: time="2025-04-30T03:28:24.029829800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 30 03:28:24.030527 containerd[1674]: time="2025-04-30T03:28:24.029852400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:28:24.030527 containerd[1674]: time="2025-04-30T03:28:24.029882100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:28:24.030527 containerd[1674]: time="2025-04-30T03:28:24.029899400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:28:24.030527 containerd[1674]: time="2025-04-30T03:28:24.029915000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:28:24.030527 containerd[1674]: time="2025-04-30T03:28:24.029983800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:28:24.030527 containerd[1674]: time="2025-04-30T03:28:24.030009000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:28:24.030527 containerd[1674]: time="2025-04-30T03:28:24.030024300Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:28:24.030527 containerd[1674]: time="2025-04-30T03:28:24.030042000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:28:24.030527 containerd[1674]: time="2025-04-30T03:28:24.030056000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:28:24.030527 containerd[1674]: time="2025-04-30T03:28:24.030074300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 30 03:28:24.030527 containerd[1674]: time="2025-04-30T03:28:24.030087200Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:28:24.031949 containerd[1674]: time="2025-04-30T03:28:24.030101300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 03:28:24.031992 containerd[1674]: time="2025-04-30T03:28:24.030471900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:28:24.031992 containerd[1674]: time="2025-04-30T03:28:24.030550300Z" level=info msg="Connect containerd service" Apr 30 03:28:24.031992 containerd[1674]: time="2025-04-30T03:28:24.030663200Z" level=info msg="using legacy CRI server" Apr 30 03:28:24.031992 containerd[1674]: time="2025-04-30T03:28:24.030677100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:28:24.031992 containerd[1674]: time="2025-04-30T03:28:24.031252600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:28:24.034740 containerd[1674]: time="2025-04-30T03:28:24.034397100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:28:24.034740 containerd[1674]: time="2025-04-30T03:28:24.034521000Z" level=info msg="Start subscribing containerd event" Apr 30 
03:28:24.034740 containerd[1674]: time="2025-04-30T03:28:24.034565300Z" level=info msg="Start recovering state" Apr 30 03:28:24.034740 containerd[1674]: time="2025-04-30T03:28:24.034643000Z" level=info msg="Start event monitor" Apr 30 03:28:24.034740 containerd[1674]: time="2025-04-30T03:28:24.034660200Z" level=info msg="Start snapshots syncer" Apr 30 03:28:24.034740 containerd[1674]: time="2025-04-30T03:28:24.034679100Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:28:24.034740 containerd[1674]: time="2025-04-30T03:28:24.034690900Z" level=info msg="Start streaming server" Apr 30 03:28:24.038918 containerd[1674]: time="2025-04-30T03:28:24.035147100Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:28:24.038918 containerd[1674]: time="2025-04-30T03:28:24.035208900Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:28:24.038918 containerd[1674]: time="2025-04-30T03:28:24.036477600Z" level=info msg="containerd successfully booted in 0.058447s" Apr 30 03:28:24.035358 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:28:24.305773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:24.306269 (kubelet)[1800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:28:24.312376 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:28:24.315740 systemd[1]: Startup finished in 755ms (firmware) + 25.901s (loader) + 1.160s (kernel) + 14.134s (initrd) + 9.915s (userspace) = 51.868s. Apr 30 03:28:24.591995 login[1782]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 03:28:24.596854 login[1783]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 03:28:24.607579 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Apr 30 03:28:24.616973 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:28:24.620699 systemd-logind[1656]: New session 1 of user core. Apr 30 03:28:24.630919 systemd-logind[1656]: New session 2 of user core. Apr 30 03:28:24.635685 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:28:24.644322 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:28:24.648686 (systemd)[1812]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:28:24.844899 systemd[1812]: Queued start job for default target default.target. Apr 30 03:28:24.851129 systemd[1812]: Created slice app.slice - User Application Slice. Apr 30 03:28:24.851166 systemd[1812]: Reached target paths.target - Paths. Apr 30 03:28:24.851183 systemd[1812]: Reached target timers.target - Timers. Apr 30 03:28:24.853054 systemd[1812]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:28:24.867320 systemd[1812]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:28:24.867449 systemd[1812]: Reached target sockets.target - Sockets. Apr 30 03:28:24.867466 systemd[1812]: Reached target basic.target - Basic System. Apr 30 03:28:24.867506 systemd[1812]: Reached target default.target - Main User Target. Apr 30 03:28:24.867540 systemd[1812]: Startup finished in 212ms. Apr 30 03:28:24.868023 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:28:24.876764 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:28:24.877669 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 30 03:28:25.054838 kubelet[1800]: E0430 03:28:25.054769 1800 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:28:25.057162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:28:25.057347 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:28:25.232884 waagent[1780]: 2025-04-30T03:28:25.232733Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Apr 30 03:28:25.267841 waagent[1780]: 2025-04-30T03:28:25.233983Z INFO Daemon Daemon OS: flatcar 4081.3.3 Apr 30 03:28:25.267841 waagent[1780]: 2025-04-30T03:28:25.234905Z INFO Daemon Daemon Python: 3.11.9 Apr 30 03:28:25.267841 waagent[1780]: 2025-04-30T03:28:25.236151Z INFO Daemon Daemon Run daemon Apr 30 03:28:25.267841 waagent[1780]: 2025-04-30T03:28:25.237113Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.3' Apr 30 03:28:25.267841 waagent[1780]: 2025-04-30T03:28:25.237469Z INFO Daemon Daemon Using waagent for provisioning Apr 30 03:28:25.267841 waagent[1780]: 2025-04-30T03:28:25.238037Z INFO Daemon Daemon Activate resource disk Apr 30 03:28:25.267841 waagent[1780]: 2025-04-30T03:28:25.239149Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 30 03:28:25.267841 waagent[1780]: 2025-04-30T03:28:25.243641Z INFO Daemon Daemon Found device: None Apr 30 03:28:25.267841 waagent[1780]: 2025-04-30T03:28:25.244356Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 30 03:28:25.267841 waagent[1780]: 2025-04-30T03:28:25.245212Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] 
unable to detect disk topology, duration=0 Apr 30 03:28:25.267841 waagent[1780]: 2025-04-30T03:28:25.247551Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 03:28:25.267841 waagent[1780]: 2025-04-30T03:28:25.248341Z INFO Daemon Daemon Running default provisioning handler Apr 30 03:28:25.303099 waagent[1780]: 2025-04-30T03:28:25.302988Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Apr 30 03:28:25.318769 waagent[1780]: 2025-04-30T03:28:25.305126Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 30 03:28:25.318769 waagent[1780]: 2025-04-30T03:28:25.305765Z INFO Daemon Daemon cloud-init is enabled: False Apr 30 03:28:25.318769 waagent[1780]: 2025-04-30T03:28:25.306670Z INFO Daemon Daemon Copying ovf-env.xml Apr 30 03:28:25.386003 waagent[1780]: 2025-04-30T03:28:25.384522Z INFO Daemon Daemon Successfully mounted dvd Apr 30 03:28:25.412647 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Apr 30 03:28:25.414366 waagent[1780]: 2025-04-30T03:28:25.414042Z INFO Daemon Daemon Detect protocol endpoint Apr 30 03:28:25.416736 waagent[1780]: 2025-04-30T03:28:25.416679Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 03:28:25.428894 waagent[1780]: 2025-04-30T03:28:25.418267Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Apr 30 03:28:25.428894 waagent[1780]: 2025-04-30T03:28:25.419071Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 30 03:28:25.428894 waagent[1780]: 2025-04-30T03:28:25.419626Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 30 03:28:25.428894 waagent[1780]: 2025-04-30T03:28:25.420413Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 30 03:28:25.454619 waagent[1780]: 2025-04-30T03:28:25.453568Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 30 03:28:25.460455 waagent[1780]: 2025-04-30T03:28:25.454927Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 30 03:28:25.460455 waagent[1780]: 2025-04-30T03:28:25.455432Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 30 03:28:25.540107 waagent[1780]: 2025-04-30T03:28:25.539945Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 30 03:28:25.544251 waagent[1780]: 2025-04-30T03:28:25.544173Z INFO Daemon Daemon Forcing an update of the goal state. Apr 30 03:28:25.550314 waagent[1780]: 2025-04-30T03:28:25.550252Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 30 03:28:25.564463 waagent[1780]: 2025-04-30T03:28:25.564413Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Apr 30 03:28:25.578338 waagent[1780]: 2025-04-30T03:28:25.566082Z INFO Daemon Apr 30 03:28:25.578338 waagent[1780]: 2025-04-30T03:28:25.567943Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e776f7fe-f011-489e-b0b1-5d8a6cb7cf04 eTag: 2130259090290962569 source: Fabric] Apr 30 03:28:25.578338 waagent[1780]: 2025-04-30T03:28:25.569420Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Apr 30 03:28:25.578338 waagent[1780]: 2025-04-30T03:28:25.572310Z INFO Daemon Apr 30 03:28:25.578338 waagent[1780]: 2025-04-30T03:28:25.573203Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 30 03:28:25.578338 waagent[1780]: 2025-04-30T03:28:25.578269Z INFO Daemon Daemon Downloading artifacts profile blob Apr 30 03:28:25.656413 waagent[1780]: 2025-04-30T03:28:25.656342Z INFO Daemon Downloaded certificate {'thumbprint': 'FD4875753147655A21B1C59A631C1E1641DF6D70', 'hasPrivateKey': True} Apr 30 03:28:25.661248 waagent[1780]: 2025-04-30T03:28:25.661189Z INFO Daemon Downloaded certificate {'thumbprint': '5BD6FC263F66C664054444D12539E12DFC868BD1', 'hasPrivateKey': False} Apr 30 03:28:25.667473 waagent[1780]: 2025-04-30T03:28:25.662497Z INFO Daemon Fetch goal state completed Apr 30 03:28:25.672273 waagent[1780]: 2025-04-30T03:28:25.672221Z INFO Daemon Daemon Starting provisioning Apr 30 03:28:25.678753 waagent[1780]: 2025-04-30T03:28:25.673295Z INFO Daemon Daemon Handle ovf-env.xml. Apr 30 03:28:25.678753 waagent[1780]: 2025-04-30T03:28:25.674250Z INFO Daemon Daemon Set hostname [ci-4081.3.3-a-e2728433b6] Apr 30 03:28:25.689283 waagent[1780]: 2025-04-30T03:28:25.689229Z INFO Daemon Daemon Publish hostname [ci-4081.3.3-a-e2728433b6] Apr 30 03:28:25.691774 waagent[1780]: 2025-04-30T03:28:25.690711Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 30 03:28:25.691774 waagent[1780]: 2025-04-30T03:28:25.691543Z INFO Daemon Daemon Primary interface is [eth0] Apr 30 03:28:25.715242 systemd-networkd[1453]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:25.715252 systemd-networkd[1453]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 30 03:28:25.715300 systemd-networkd[1453]: eth0: DHCP lease lost Apr 30 03:28:25.716726 waagent[1780]: 2025-04-30T03:28:25.716534Z INFO Daemon Daemon Create user account if not exists Apr 30 03:28:25.732494 waagent[1780]: 2025-04-30T03:28:25.718283Z INFO Daemon Daemon User core already exists, skip useradd Apr 30 03:28:25.732494 waagent[1780]: 2025-04-30T03:28:25.719071Z INFO Daemon Daemon Configure sudoer Apr 30 03:28:25.732494 waagent[1780]: 2025-04-30T03:28:25.720163Z INFO Daemon Daemon Configure sshd Apr 30 03:28:25.732494 waagent[1780]: 2025-04-30T03:28:25.720902Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 30 03:28:25.732494 waagent[1780]: 2025-04-30T03:28:25.721540Z INFO Daemon Daemon Deploy ssh public key. Apr 30 03:28:25.732763 systemd-networkd[1453]: eth0: DHCPv6 lease lost Apr 30 03:28:25.769639 systemd-networkd[1453]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Apr 30 03:28:26.835606 waagent[1780]: 2025-04-30T03:28:26.835522Z INFO Daemon Daemon Provisioning complete Apr 30 03:28:26.850534 waagent[1780]: 2025-04-30T03:28:26.850474Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 30 03:28:26.858477 waagent[1780]: 2025-04-30T03:28:26.853467Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Apr 30 03:28:26.858477 waagent[1780]: 2025-04-30T03:28:26.854671Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Apr 30 03:28:26.977226 waagent[1871]: 2025-04-30T03:28:26.977138Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Apr 30 03:28:26.977639 waagent[1871]: 2025-04-30T03:28:26.977284Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.3 Apr 30 03:28:26.977639 waagent[1871]: 2025-04-30T03:28:26.977365Z INFO ExtHandler ExtHandler Python: 3.11.9 Apr 30 03:28:27.012161 waagent[1871]: 2025-04-30T03:28:27.012074Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 30 03:28:27.012383 waagent[1871]: 2025-04-30T03:28:27.012328Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 03:28:27.012484 waagent[1871]: 2025-04-30T03:28:27.012443Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 03:28:27.020425 waagent[1871]: 2025-04-30T03:28:27.020361Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 30 03:28:27.026038 waagent[1871]: 2025-04-30T03:28:27.025988Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Apr 30 03:28:27.026459 waagent[1871]: 2025-04-30T03:28:27.026404Z INFO ExtHandler Apr 30 03:28:27.026541 waagent[1871]: 2025-04-30T03:28:27.026494Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2096a6b0-be59-4444-8711-49f1acbc0eb5 eTag: 2130259090290962569 source: Fabric] Apr 30 03:28:27.026864 waagent[1871]: 2025-04-30T03:28:27.026813Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Apr 30 03:28:27.027391 waagent[1871]: 2025-04-30T03:28:27.027336Z INFO ExtHandler Apr 30 03:28:27.027478 waagent[1871]: 2025-04-30T03:28:27.027417Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 30 03:28:27.031761 waagent[1871]: 2025-04-30T03:28:27.031721Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 30 03:28:27.101778 waagent[1871]: 2025-04-30T03:28:27.101647Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FD4875753147655A21B1C59A631C1E1641DF6D70', 'hasPrivateKey': True} Apr 30 03:28:27.102150 waagent[1871]: 2025-04-30T03:28:27.102093Z INFO ExtHandler Downloaded certificate {'thumbprint': '5BD6FC263F66C664054444D12539E12DFC868BD1', 'hasPrivateKey': False} Apr 30 03:28:27.102574 waagent[1871]: 2025-04-30T03:28:27.102525Z INFO ExtHandler Fetch goal state completed Apr 30 03:28:27.117194 waagent[1871]: 2025-04-30T03:28:27.117135Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1871 Apr 30 03:28:27.117343 waagent[1871]: 2025-04-30T03:28:27.117296Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 30 03:28:27.118881 waagent[1871]: 2025-04-30T03:28:27.118825Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.3', '', 'Flatcar Container Linux by Kinvolk'] Apr 30 03:28:27.119260 waagent[1871]: 2025-04-30T03:28:27.119211Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 30 03:28:27.151124 waagent[1871]: 2025-04-30T03:28:27.151085Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 30 03:28:27.151311 waagent[1871]: 2025-04-30T03:28:27.151267Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 30 03:28:27.157572 waagent[1871]: 2025-04-30T03:28:27.157476Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Apr 30 03:28:27.164015 systemd[1]: Reloading requested from client PID 1886 ('systemctl') (unit waagent.service)... Apr 30 03:28:27.164031 systemd[1]: Reloading... Apr 30 03:28:27.250622 zram_generator::config[1921]: No configuration found. Apr 30 03:28:27.364930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:27.445238 systemd[1]: Reloading finished in 280 ms. Apr 30 03:28:27.470184 waagent[1871]: 2025-04-30T03:28:27.469730Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Apr 30 03:28:27.478292 systemd[1]: Reloading requested from client PID 1977 ('systemctl') (unit waagent.service)... Apr 30 03:28:27.478309 systemd[1]: Reloading... Apr 30 03:28:27.564668 zram_generator::config[2007]: No configuration found. Apr 30 03:28:27.685170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:27.765701 systemd[1]: Reloading finished in 286 ms. Apr 30 03:28:27.790220 waagent[1871]: 2025-04-30T03:28:27.788666Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 30 03:28:27.790220 waagent[1871]: 2025-04-30T03:28:27.788876Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 30 03:28:29.015812 waagent[1871]: 2025-04-30T03:28:29.015719Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 30 03:28:29.016480 waagent[1871]: 2025-04-30T03:28:29.016413Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 30 03:28:29.017240 waagent[1871]: 2025-04-30T03:28:29.017178Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 30 03:28:29.017385 waagent[1871]: 2025-04-30T03:28:29.017319Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 03:28:29.017705 waagent[1871]: 2025-04-30T03:28:29.017650Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 03:28:29.017904 waagent[1871]: 2025-04-30T03:28:29.017859Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 30 03:28:29.018351 waagent[1871]: 2025-04-30T03:28:29.018300Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 30 03:28:29.018606 waagent[1871]: 2025-04-30T03:28:29.018544Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 03:28:29.018710 waagent[1871]: 2025-04-30T03:28:29.018656Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 30 03:28:29.018785 waagent[1871]: 2025-04-30T03:28:29.018733Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Apr 30 03:28:29.019381 waagent[1871]: 2025-04-30T03:28:29.019322Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Apr 30 03:28:29.019455 waagent[1871]: 2025-04-30T03:28:29.019381Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Apr 30 03:28:29.019455 waagent[1871]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Apr 30 03:28:29.019455 waagent[1871]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Apr 30 03:28:29.019455 waagent[1871]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Apr 30 03:28:29.019455 waagent[1871]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Apr 30 03:28:29.019455 waagent[1871]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 30 03:28:29.019455 waagent[1871]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 30 03:28:29.020202 waagent[1871]: 2025-04-30T03:28:29.019919Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Apr 30 03:28:29.020266 waagent[1871]: 2025-04-30T03:28:29.020175Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Apr 30 03:28:29.021154 waagent[1871]: 2025-04-30T03:28:29.021115Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 30 03:28:29.022041 waagent[1871]: 2025-04-30T03:28:29.021977Z INFO EnvHandler ExtHandler Configure routes
Apr 30 03:28:29.022388 waagent[1871]: 2025-04-30T03:28:29.022349Z INFO EnvHandler ExtHandler Gateway:None
Apr 30 03:28:29.022840 waagent[1871]: 2025-04-30T03:28:29.022653Z INFO EnvHandler ExtHandler Routes:None
Apr 30 03:28:29.027309 waagent[1871]: 2025-04-30T03:28:29.027234Z INFO ExtHandler ExtHandler
Apr 30 03:28:29.028927 waagent[1871]: 2025-04-30T03:28:29.028863Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: bf4fc425-7405-4f6a-af06-404fe8d79424 correlation 1b3cc737-7f21-486c-bf13-8a2fe3db92e6 created: 2025-04-30T03:27:20.469719Z]
Apr 30 03:28:29.030245 waagent[1871]: 2025-04-30T03:28:29.030196Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Apr 30 03:28:29.032978 waagent[1871]: 2025-04-30T03:28:29.032772Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 5 ms]
Apr 30 03:28:29.062196 waagent[1871]: 2025-04-30T03:28:29.062133Z INFO MonitorHandler ExtHandler Network interfaces:
Apr 30 03:28:29.062196 waagent[1871]: Executing ['ip', '-a', '-o', 'link']:
Apr 30 03:28:29.062196 waagent[1871]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Apr 30 03:28:29.062196 waagent[1871]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:df:7b:f7 brd ff:ff:ff:ff:ff:ff
Apr 30 03:28:29.062196 waagent[1871]: 3: enP54414s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:df:7b:f7 brd ff:ff:ff:ff:ff:ff\ altname enP54414p0s2
Apr 30 03:28:29.062196 waagent[1871]: Executing ['ip', '-4', '-a', '-o', 'address']:
Apr 30 03:28:29.062196 waagent[1871]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Apr 30 03:28:29.062196 waagent[1871]: 2: eth0 inet 10.200.8.4/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Apr 30 03:28:29.062196 waagent[1871]: Executing ['ip', '-6', '-a', '-o', 'address']:
Apr 30 03:28:29.062196 waagent[1871]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Apr 30 03:28:29.062196 waagent[1871]: 2: eth0 inet6 fe80::6245:bdff:fedf:7bf7/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Apr 30 03:28:29.062196 waagent[1871]: 3: enP54414s1 inet6 fe80::6245:bdff:fedf:7bf7/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Apr 30 03:28:29.078626 waagent[1871]: 2025-04-30T03:28:29.078542Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 60FE87DF-656F-4E21-B490-B3F9E6910604;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Apr 30 03:28:29.153926 waagent[1871]: 2025-04-30T03:28:29.153850Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Apr 30 03:28:29.153926 waagent[1871]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:28:29.153926 waagent[1871]: pkts bytes target prot opt in out source destination
Apr 30 03:28:29.153926 waagent[1871]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:28:29.153926 waagent[1871]: pkts bytes target prot opt in out source destination
Apr 30 03:28:29.153926 waagent[1871]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:28:29.153926 waagent[1871]: pkts bytes target prot opt in out source destination
Apr 30 03:28:29.153926 waagent[1871]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 30 03:28:29.153926 waagent[1871]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 30 03:28:29.153926 waagent[1871]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 30 03:28:29.157202 waagent[1871]: 2025-04-30T03:28:29.157142Z INFO EnvHandler ExtHandler Current Firewall rules:
Apr 30 03:28:29.157202 waagent[1871]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:28:29.157202 waagent[1871]: pkts bytes target prot opt in out source destination
Apr 30 03:28:29.157202 waagent[1871]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:28:29.157202 waagent[1871]: pkts bytes target prot opt in out source destination
Apr 30 03:28:29.157202 waagent[1871]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:28:29.157202 waagent[1871]: pkts bytes target prot opt in out source destination
Apr 30 03:28:29.157202 waagent[1871]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 30 03:28:29.157202 waagent[1871]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 30 03:28:29.157202 waagent[1871]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 30 03:28:29.157605 waagent[1871]: 2025-04-30T03:28:29.157437Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Apr 30 03:28:35.308210 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:28:35.313848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:28:35.417043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:28:35.429907 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:28:36.029073 kubelet[2109]: E0430 03:28:36.028992 2109 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:28:36.032834 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:28:36.033042 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:28:37.792294 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 03:28:37.798895 systemd[1]: Started sshd@0-10.200.8.4:22-10.200.16.10:56836.service - OpenSSH per-connection server daemon (10.200.16.10:56836).
Apr 30 03:28:38.473517 sshd[2118]: Accepted publickey for core from 10.200.16.10 port 56836 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:28:38.475043 sshd[2118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:38.479297 systemd-logind[1656]: New session 3 of user core.
Apr 30 03:28:38.489750 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 03:28:39.026413 systemd[1]: Started sshd@1-10.200.8.4:22-10.200.16.10:58846.service - OpenSSH per-connection server daemon (10.200.16.10:58846).
Apr 30 03:28:39.650671 sshd[2123]: Accepted publickey for core from 10.200.16.10 port 58846 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:28:39.652150 sshd[2123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:39.656097 systemd-logind[1656]: New session 4 of user core.
Apr 30 03:28:39.665076 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 03:28:40.096368 sshd[2123]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:40.100427 systemd[1]: sshd@1-10.200.8.4:22-10.200.16.10:58846.service: Deactivated successfully.
Apr 30 03:28:40.102523 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 03:28:40.103341 systemd-logind[1656]: Session 4 logged out. Waiting for processes to exit.
Apr 30 03:28:40.104471 systemd-logind[1656]: Removed session 4.
Apr 30 03:28:40.212959 systemd[1]: Started sshd@2-10.200.8.4:22-10.200.16.10:58852.service - OpenSSH per-connection server daemon (10.200.16.10:58852).
Apr 30 03:28:40.835100 sshd[2130]: Accepted publickey for core from 10.200.16.10 port 58852 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:28:40.836653 sshd[2130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:40.840564 systemd-logind[1656]: New session 5 of user core.
Apr 30 03:28:40.850751 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 03:28:41.277550 sshd[2130]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:41.280782 systemd[1]: sshd@2-10.200.8.4:22-10.200.16.10:58852.service: Deactivated successfully.
Apr 30 03:28:41.282785 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 03:28:41.284197 systemd-logind[1656]: Session 5 logged out. Waiting for processes to exit.
Apr 30 03:28:41.285184 systemd-logind[1656]: Removed session 5.
Apr 30 03:28:41.387476 systemd[1]: Started sshd@3-10.200.8.4:22-10.200.16.10:58854.service - OpenSSH per-connection server daemon (10.200.16.10:58854).
Apr 30 03:28:42.012810 sshd[2137]: Accepted publickey for core from 10.200.16.10 port 58854 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:28:42.014303 sshd[2137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:42.018428 systemd-logind[1656]: New session 6 of user core.
Apr 30 03:28:42.025762 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 03:28:42.464666 sshd[2137]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:42.469901 systemd[1]: sshd@3-10.200.8.4:22-10.200.16.10:58854.service: Deactivated successfully.
Apr 30 03:28:42.472635 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 03:28:42.473583 systemd-logind[1656]: Session 6 logged out. Waiting for processes to exit.
Apr 30 03:28:42.475027 systemd-logind[1656]: Removed session 6.
Apr 30 03:28:42.578736 systemd[1]: Started sshd@4-10.200.8.4:22-10.200.16.10:58866.service - OpenSSH per-connection server daemon (10.200.16.10:58866).
Apr 30 03:28:43.203211 sshd[2144]: Accepted publickey for core from 10.200.16.10 port 58866 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:28:43.205001 sshd[2144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:43.210503 systemd-logind[1656]: New session 7 of user core.
Apr 30 03:28:43.219756 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 03:28:43.729913 sudo[2147]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 03:28:43.730265 sudo[2147]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:28:43.754885 sudo[2147]: pam_unix(sudo:session): session closed for user root
Apr 30 03:28:43.856775 sshd[2144]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:43.860636 systemd[1]: sshd@4-10.200.8.4:22-10.200.16.10:58866.service: Deactivated successfully.
Apr 30 03:28:43.862878 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 03:28:43.864339 systemd-logind[1656]: Session 7 logged out. Waiting for processes to exit.
Apr 30 03:28:43.865362 systemd-logind[1656]: Removed session 7.
Apr 30 03:28:43.969783 systemd[1]: Started sshd@5-10.200.8.4:22-10.200.16.10:58870.service - OpenSSH per-connection server daemon (10.200.16.10:58870).
Apr 30 03:28:44.596174 sshd[2152]: Accepted publickey for core from 10.200.16.10 port 58870 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:28:44.597734 sshd[2152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:44.602208 systemd-logind[1656]: New session 8 of user core.
Apr 30 03:28:44.608740 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 03:28:44.939732 sudo[2156]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 03:28:44.940087 sudo[2156]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:28:44.943552 sudo[2156]: pam_unix(sudo:session): session closed for user root
Apr 30 03:28:44.948423 sudo[2155]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 30 03:28:44.948825 sudo[2155]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:28:44.960921 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 30 03:28:44.962874 auditctl[2159]: No rules
Apr 30 03:28:44.963231 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 03:28:44.963438 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 30 03:28:44.965980 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:28:44.992177 augenrules[2177]: No rules
Apr 30 03:28:44.993671 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:28:44.995473 sudo[2155]: pam_unix(sudo:session): session closed for user root
Apr 30 03:28:45.098445 sshd[2152]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:45.101563 systemd[1]: sshd@5-10.200.8.4:22-10.200.16.10:58870.service: Deactivated successfully.
Apr 30 03:28:45.103774 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 03:28:45.105356 systemd-logind[1656]: Session 8 logged out. Waiting for processes to exit.
Apr 30 03:28:45.106373 systemd-logind[1656]: Removed session 8.
Apr 30 03:28:45.209735 systemd[1]: Started sshd@6-10.200.8.4:22-10.200.16.10:58882.service - OpenSSH per-connection server daemon (10.200.16.10:58882).
Apr 30 03:28:45.832420 sshd[2185]: Accepted publickey for core from 10.200.16.10 port 58882 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:28:45.834110 sshd[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:45.838538 systemd-logind[1656]: New session 9 of user core.
Apr 30 03:28:45.849868 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 03:28:46.178729 sudo[2188]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 03:28:46.179869 sudo[2188]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:28:46.181339 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 30 03:28:46.191054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:28:46.617324 chronyd[1675]: Selected source PHC0
Apr 30 03:28:46.899501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:28:46.905858 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:28:46.957542 kubelet[2201]: E0430 03:28:46.957503 2201 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:28:46.960899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:28:46.961105 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:28:48.035889 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 03:28:48.038361 (dockerd)[2219]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 03:28:49.610848 dockerd[2219]: time="2025-04-30T03:28:49.610562407Z" level=info msg="Starting up"
Apr 30 03:28:49.984682 dockerd[2219]: time="2025-04-30T03:28:49.984551707Z" level=info msg="Loading containers: start."
Apr 30 03:28:50.186620 kernel: Initializing XFRM netlink socket
Apr 30 03:28:50.289343 systemd-networkd[1453]: docker0: Link UP
Apr 30 03:28:50.306812 dockerd[2219]: time="2025-04-30T03:28:50.306777307Z" level=info msg="Loading containers: done."
Apr 30 03:28:50.361189 dockerd[2219]: time="2025-04-30T03:28:50.361136007Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 03:28:50.361425 dockerd[2219]: time="2025-04-30T03:28:50.361301607Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 30 03:28:50.361503 dockerd[2219]: time="2025-04-30T03:28:50.361434407Z" level=info msg="Daemon has completed initialization"
Apr 30 03:28:50.409776 dockerd[2219]: time="2025-04-30T03:28:50.409706907Z" level=info msg="API listen on /run/docker.sock"
Apr 30 03:28:50.410185 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 03:28:51.985395 containerd[1674]: time="2025-04-30T03:28:51.985346507Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 03:28:52.646002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322184841.mount: Deactivated successfully.
Apr 30 03:28:54.228784 containerd[1674]: time="2025-04-30T03:28:54.228734107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:54.233623 containerd[1674]: time="2025-04-30T03:28:54.233383907Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674881"
Apr 30 03:28:54.235778 containerd[1674]: time="2025-04-30T03:28:54.235689007Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:54.239503 containerd[1674]: time="2025-04-30T03:28:54.239451307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:54.240693 containerd[1674]: time="2025-04-30T03:28:54.240447807Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.2550588s"
Apr 30 03:28:54.240693 containerd[1674]: time="2025-04-30T03:28:54.240493207Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
Apr 30 03:28:54.261694 containerd[1674]: time="2025-04-30T03:28:54.261664007Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 03:28:55.872116 containerd[1674]: time="2025-04-30T03:28:55.872062201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:55.875046 containerd[1674]: time="2025-04-30T03:28:55.874981593Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617542"
Apr 30 03:28:55.879292 containerd[1674]: time="2025-04-30T03:28:55.879241627Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:55.884483 containerd[1674]: time="2025-04-30T03:28:55.884431691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:55.886234 containerd[1674]: time="2025-04-30T03:28:55.885435923Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.623737315s"
Apr 30 03:28:55.886234 containerd[1674]: time="2025-04-30T03:28:55.885476224Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
Apr 30 03:28:55.906689 containerd[1674]: time="2025-04-30T03:28:55.906659591Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 03:28:56.985887 containerd[1674]: time="2025-04-30T03:28:56.985832997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:56.987659 containerd[1674]: time="2025-04-30T03:28:56.987605253Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903690"
Apr 30 03:28:56.994532 containerd[1674]: time="2025-04-30T03:28:56.994477269Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:56.999209 containerd[1674]: time="2025-04-30T03:28:56.999157717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:57.000334 containerd[1674]: time="2025-04-30T03:28:57.000165349Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.093472156s"
Apr 30 03:28:57.000334 containerd[1674]: time="2025-04-30T03:28:57.000204750Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
Apr 30 03:28:57.022372 containerd[1674]: time="2025-04-30T03:28:57.022341747Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 03:28:57.162980 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 30 03:28:57.169815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:28:57.263425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:28:57.267875 (kubelet)[2442]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:28:57.796616 kubelet[2442]: E0430 03:28:57.796497 2442 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:28:57.798910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:28:57.799122 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:28:58.626104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804099497.mount: Deactivated successfully.
Apr 30 03:28:59.079607 containerd[1674]: time="2025-04-30T03:28:59.079537552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:59.082476 containerd[1674]: time="2025-04-30T03:28:59.082415480Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185825"
Apr 30 03:28:59.085132 containerd[1674]: time="2025-04-30T03:28:59.085077406Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:59.088561 containerd[1674]: time="2025-04-30T03:28:59.088527340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:59.089649 containerd[1674]: time="2025-04-30T03:28:59.089121246Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.066731798s"
Apr 30 03:28:59.089649 containerd[1674]: time="2025-04-30T03:28:59.089165746Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
Apr 30 03:28:59.109339 containerd[1674]: time="2025-04-30T03:28:59.109304143Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 03:28:59.602521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3541198844.mount: Deactivated successfully.
Apr 30 03:29:00.697224 containerd[1674]: time="2025-04-30T03:29:00.697161150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:00.700081 containerd[1674]: time="2025-04-30T03:29:00.700021478Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Apr 30 03:29:00.703076 containerd[1674]: time="2025-04-30T03:29:00.703020807Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:00.707449 containerd[1674]: time="2025-04-30T03:29:00.707419850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:00.708855 containerd[1674]: time="2025-04-30T03:29:00.708445360Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.599099117s"
Apr 30 03:29:00.708855 containerd[1674]: time="2025-04-30T03:29:00.708487361Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Apr 30 03:29:00.729059 containerd[1674]: time="2025-04-30T03:29:00.729024561Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 30 03:29:01.220263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3646751730.mount: Deactivated successfully.
Apr 30 03:29:01.237128 containerd[1674]: time="2025-04-30T03:29:01.237046422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:01.252167 containerd[1674]: time="2025-04-30T03:29:01.252089769Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Apr 30 03:29:01.255301 containerd[1674]: time="2025-04-30T03:29:01.255237400Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:01.261195 containerd[1674]: time="2025-04-30T03:29:01.261132758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:01.262038 containerd[1674]: time="2025-04-30T03:29:01.261837465Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 532.773904ms"
Apr 30 03:29:01.262038 containerd[1674]: time="2025-04-30T03:29:01.261875265Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Apr 30 03:29:01.288470 containerd[1674]: time="2025-04-30T03:29:01.288273223Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Apr 30 03:29:01.840313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3813641057.mount: Deactivated successfully.
Apr 30 03:29:04.030963 containerd[1674]: time="2025-04-30T03:29:04.030908067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:04.032772 containerd[1674]: time="2025-04-30T03:29:04.032713285Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579"
Apr 30 03:29:04.036010 containerd[1674]: time="2025-04-30T03:29:04.035951617Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:04.039946 containerd[1674]: time="2025-04-30T03:29:04.039880155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:04.041081 containerd[1674]: time="2025-04-30T03:29:04.040917165Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.752606342s"
Apr 30 03:29:04.041081 containerd[1674]: time="2025-04-30T03:29:04.040957866Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Apr 30 03:29:07.421627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:29:07.428098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:29:07.450036 systemd[1]: Reloading requested from client PID 2629 ('systemctl') (unit session-9.scope)...
Apr 30 03:29:07.450054 systemd[1]: Reloading...
Apr 30 03:29:07.584460 zram_generator::config[2672]: No configuration found.
Apr 30 03:29:07.623624 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Apr 30 03:29:07.698218 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:29:07.779369 systemd[1]: Reloading finished in 328 ms.
Apr 30 03:29:07.852438 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 30 03:29:07.852570 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 30 03:29:07.852920 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:29:07.861040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:29:08.107315 update_engine[1659]: I20250430 03:29:08.106752 1659 update_attempter.cc:509] Updating boot flags...
Apr 30 03:29:08.389306 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2744) Apr 30 03:29:08.958628 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2744) Apr 30 03:29:09.098646 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2744) Apr 30 03:29:09.458375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:09.467161 (kubelet)[2827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:29:09.512223 kubelet[2827]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:29:09.512223 kubelet[2827]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:29:09.512223 kubelet[2827]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 03:29:09.512223 kubelet[2827]: I0430 03:29:09.511581 2827 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:29:10.596629 kubelet[2827]: I0430 03:29:10.595880 2827 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:29:10.596629 kubelet[2827]: I0430 03:29:10.596102 2827 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:29:10.597488 kubelet[2827]: I0430 03:29:10.596951 2827 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:29:10.659071 kubelet[2827]: I0430 03:29:10.658502 2827 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:29:10.659520 kubelet[2827]: E0430 03:29:10.659474 2827 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:10.668886 kubelet[2827]: I0430 03:29:10.668853 2827 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:29:10.670115 kubelet[2827]: I0430 03:29:10.670064 2827 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:29:10.670302 kubelet[2827]: I0430 03:29:10.670113 2827 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-e2728433b6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:29:10.670461 kubelet[2827]: I0430 03:29:10.670316 2827 topology_manager.go:138] "Creating topology manager with none policy" Apr 
30 03:29:10.670461 kubelet[2827]: I0430 03:29:10.670330 2827 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:29:10.670555 kubelet[2827]: I0430 03:29:10.670475 2827 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:10.671249 kubelet[2827]: I0430 03:29:10.671229 2827 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:29:10.671331 kubelet[2827]: I0430 03:29:10.671259 2827 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:29:10.671331 kubelet[2827]: I0430 03:29:10.671304 2827 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:29:10.671331 kubelet[2827]: I0430 03:29:10.671322 2827 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:29:10.677479 kubelet[2827]: W0430 03:29:10.676712 2827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:10.677479 kubelet[2827]: E0430 03:29:10.676778 2827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:10.677479 kubelet[2827]: W0430 03:29:10.676844 2827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-e2728433b6&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:10.677479 kubelet[2827]: E0430 03:29:10.676881 2827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-e2728433b6&limit=500&resourceVersion=0": dial tcp 
10.200.8.4:6443: connect: connection refused Apr 30 03:29:10.677479 kubelet[2827]: I0430 03:29:10.677255 2827 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:29:10.679917 kubelet[2827]: I0430 03:29:10.678864 2827 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:29:10.679917 kubelet[2827]: W0430 03:29:10.678924 2827 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 03:29:10.679917 kubelet[2827]: I0430 03:29:10.679784 2827 server.go:1264] "Started kubelet" Apr 30 03:29:10.681544 kubelet[2827]: I0430 03:29:10.681411 2827 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:29:10.682620 kubelet[2827]: I0430 03:29:10.682550 2827 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:29:10.685497 kubelet[2827]: I0430 03:29:10.685414 2827 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:29:10.701083 kubelet[2827]: I0430 03:29:10.700727 2827 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:29:10.701083 kubelet[2827]: I0430 03:29:10.700933 2827 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:29:10.701083 kubelet[2827]: E0430 03:29:10.700934 2827 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-a-e2728433b6.183afaf87adf076c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-e2728433b6,UID:ci-4081.3.3-a-e2728433b6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-e2728433b6,},FirstTimestamp:2025-04-30 03:29:10.6797587 +0000 UTC m=+1.205914545,LastTimestamp:2025-04-30 03:29:10.6797587 +0000 UTC m=+1.205914545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-e2728433b6,}" Apr 30 03:29:10.704124 kubelet[2827]: I0430 03:29:10.703311 2827 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:29:10.704124 kubelet[2827]: I0430 03:29:10.703676 2827 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:29:10.704124 kubelet[2827]: I0430 03:29:10.703733 2827 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:29:10.706553 kubelet[2827]: W0430 03:29:10.705737 2827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:10.706553 kubelet[2827]: E0430 03:29:10.705798 2827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:10.706553 kubelet[2827]: E0430 03:29:10.705863 2827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-e2728433b6?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="200ms" Apr 30 03:29:10.707923 kubelet[2827]: I0430 03:29:10.707898 2827 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:29:10.708138 kubelet[2827]: I0430 03:29:10.708114 2827 factory.go:219] Registration of the crio container 
factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:29:10.710521 kubelet[2827]: I0430 03:29:10.710495 2827 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:29:10.726618 kubelet[2827]: I0430 03:29:10.726454 2827 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:29:10.727952 kubelet[2827]: I0430 03:29:10.727928 2827 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:29:10.728345 kubelet[2827]: I0430 03:29:10.728075 2827 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:29:10.728345 kubelet[2827]: I0430 03:29:10.728102 2827 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:29:10.728345 kubelet[2827]: E0430 03:29:10.728152 2827 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:29:10.736528 kubelet[2827]: W0430 03:29:10.736484 2827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:10.736652 kubelet[2827]: E0430 03:29:10.736535 2827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:10.752363 kubelet[2827]: I0430 03:29:10.752330 2827 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:29:10.752363 kubelet[2827]: I0430 03:29:10.752347 2827 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:29:10.752363 kubelet[2827]: I0430 03:29:10.752370 2827 
state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:10.760330 kubelet[2827]: I0430 03:29:10.760301 2827 policy_none.go:49] "None policy: Start" Apr 30 03:29:10.760988 kubelet[2827]: I0430 03:29:10.760959 2827 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:29:10.761080 kubelet[2827]: I0430 03:29:10.760997 2827 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:29:10.768873 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 03:29:10.777387 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 03:29:10.780918 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 03:29:10.788997 kubelet[2827]: I0430 03:29:10.788971 2827 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:29:10.789360 kubelet[2827]: I0430 03:29:10.789308 2827 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:29:10.789710 kubelet[2827]: I0430 03:29:10.789691 2827 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:29:10.791929 kubelet[2827]: E0430 03:29:10.791909 2827 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-a-e2728433b6\" not found" Apr 30 03:29:10.805778 kubelet[2827]: I0430 03:29:10.805748 2827 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-e2728433b6" Apr 30 03:29:10.806154 kubelet[2827]: E0430 03:29:10.806128 2827 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4081.3.3-a-e2728433b6" Apr 30 03:29:10.828494 kubelet[2827]: I0430 03:29:10.828427 2827 topology_manager.go:215] "Topology Admit 
Handler" podUID="b8d2e9981d09d20207636dec565f7559" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:10.830225 kubelet[2827]: I0430 03:29:10.830198 2827 topology_manager.go:215] "Topology Admit Handler" podUID="f539cbe465ba94f2f1bcb760a24b81e2" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:10.831648 kubelet[2827]: I0430 03:29:10.831627 2827 topology_manager.go:215] "Topology Admit Handler" podUID="d5edd97421b19c079763401335d0222e" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:10.838765 systemd[1]: Created slice kubepods-burstable-podb8d2e9981d09d20207636dec565f7559.slice - libcontainer container kubepods-burstable-podb8d2e9981d09d20207636dec565f7559.slice. Apr 30 03:29:10.849517 systemd[1]: Created slice kubepods-burstable-podf539cbe465ba94f2f1bcb760a24b81e2.slice - libcontainer container kubepods-burstable-podf539cbe465ba94f2f1bcb760a24b81e2.slice. Apr 30 03:29:10.860144 systemd[1]: Created slice kubepods-burstable-podd5edd97421b19c079763401335d0222e.slice - libcontainer container kubepods-burstable-podd5edd97421b19c079763401335d0222e.slice. 
Apr 30 03:29:10.906412 kubelet[2827]: E0430 03:29:10.906357 2827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-e2728433b6?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="400ms" Apr 30 03:29:11.005820 kubelet[2827]: I0430 03:29:11.005762 2827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f539cbe465ba94f2f1bcb760a24b81e2-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-e2728433b6\" (UID: \"f539cbe465ba94f2f1bcb760a24b81e2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:11.005820 kubelet[2827]: I0430 03:29:11.005814 2827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f539cbe465ba94f2f1bcb760a24b81e2-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-e2728433b6\" (UID: \"f539cbe465ba94f2f1bcb760a24b81e2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:11.006039 kubelet[2827]: I0430 03:29:11.005838 2827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5edd97421b19c079763401335d0222e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-e2728433b6\" (UID: \"d5edd97421b19c079763401335d0222e\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:11.006039 kubelet[2827]: I0430 03:29:11.005858 2827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8d2e9981d09d20207636dec565f7559-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-e2728433b6\" (UID: \"b8d2e9981d09d20207636dec565f7559\") " 
pod="kube-system/kube-apiserver-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:11.006039 kubelet[2827]: I0430 03:29:11.005878 2827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8d2e9981d09d20207636dec565f7559-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-e2728433b6\" (UID: \"b8d2e9981d09d20207636dec565f7559\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:11.006039 kubelet[2827]: I0430 03:29:11.005914 2827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f539cbe465ba94f2f1bcb760a24b81e2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-e2728433b6\" (UID: \"f539cbe465ba94f2f1bcb760a24b81e2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:11.006039 kubelet[2827]: I0430 03:29:11.005936 2827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f539cbe465ba94f2f1bcb760a24b81e2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-e2728433b6\" (UID: \"f539cbe465ba94f2f1bcb760a24b81e2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:11.006191 kubelet[2827]: I0430 03:29:11.005961 2827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8d2e9981d09d20207636dec565f7559-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-e2728433b6\" (UID: \"b8d2e9981d09d20207636dec565f7559\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:11.006191 kubelet[2827]: I0430 03:29:11.005986 2827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/f539cbe465ba94f2f1bcb760a24b81e2-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-e2728433b6\" (UID: \"f539cbe465ba94f2f1bcb760a24b81e2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:11.008301 kubelet[2827]: I0430 03:29:11.008272 2827 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-e2728433b6" Apr 30 03:29:11.008691 kubelet[2827]: E0430 03:29:11.008661 2827 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4081.3.3-a-e2728433b6" Apr 30 03:29:11.148437 containerd[1674]: time="2025-04-30T03:29:11.148304652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-e2728433b6,Uid:b8d2e9981d09d20207636dec565f7559,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:11.156506 containerd[1674]: time="2025-04-30T03:29:11.156333935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-e2728433b6,Uid:f539cbe465ba94f2f1bcb760a24b81e2,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:11.163578 containerd[1674]: time="2025-04-30T03:29:11.163268107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-e2728433b6,Uid:d5edd97421b19c079763401335d0222e,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:11.307575 kubelet[2827]: E0430 03:29:11.307526 2827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-e2728433b6?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="800ms" Apr 30 03:29:11.411044 kubelet[2827]: I0430 03:29:11.410853 2827 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-e2728433b6" Apr 30 03:29:11.411369 kubelet[2827]: E0430 03:29:11.411327 2827 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4081.3.3-a-e2728433b6" Apr 30 03:29:11.484041 kubelet[2827]: W0430 03:29:11.483985 2827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-e2728433b6&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:11.484041 kubelet[2827]: E0430 03:29:11.484045 2827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-e2728433b6&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:11.633489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120347978.mount: Deactivated successfully. 
Apr 30 03:29:11.662097 containerd[1674]: time="2025-04-30T03:29:11.661994370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:11.663968 containerd[1674]: time="2025-04-30T03:29:11.663895390Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 30 03:29:11.666702 containerd[1674]: time="2025-04-30T03:29:11.666671119Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:11.670135 containerd[1674]: time="2025-04-30T03:29:11.670107254Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:11.673232 containerd[1674]: time="2025-04-30T03:29:11.673198186Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:11.675652 containerd[1674]: time="2025-04-30T03:29:11.675612311Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:29:11.678654 containerd[1674]: time="2025-04-30T03:29:11.678579042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:29:11.683159 containerd[1674]: time="2025-04-30T03:29:11.683109689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:11.684387 
containerd[1674]: time="2025-04-30T03:29:11.683869397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 535.476445ms" Apr 30 03:29:11.688099 containerd[1674]: time="2025-04-30T03:29:11.688064940Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 531.658505ms" Apr 30 03:29:11.689083 kubelet[2827]: W0430 03:29:11.689035 2827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:11.689379 kubelet[2827]: E0430 03:29:11.689095 2827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:11.694312 containerd[1674]: time="2025-04-30T03:29:11.694277705Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 530.918598ms" Apr 30 03:29:11.950529 kubelet[2827]: W0430 03:29:11.950465 2827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:11.950529 kubelet[2827]: E0430 03:29:11.950533 2827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:12.108607 kubelet[2827]: E0430 03:29:12.108540 2827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-e2728433b6?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="1.6s" Apr 30 03:29:12.214937 kubelet[2827]: I0430 03:29:12.214649 2827 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-e2728433b6" Apr 30 03:29:12.215357 kubelet[2827]: E0430 03:29:12.215076 2827 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4081.3.3-a-e2728433b6" Apr 30 03:29:12.283738 kubelet[2827]: W0430 03:29:12.283685 2827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:12.283738 kubelet[2827]: E0430 03:29:12.283745 2827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:12.321391 containerd[1674]: time="2025-04-30T03:29:12.317833661Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:12.321391 containerd[1674]: time="2025-04-30T03:29:12.319074274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:12.321391 containerd[1674]: time="2025-04-30T03:29:12.319092074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:12.321391 containerd[1674]: time="2025-04-30T03:29:12.319185675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:12.323680 containerd[1674]: time="2025-04-30T03:29:12.323396219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:12.323680 containerd[1674]: time="2025-04-30T03:29:12.323450819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:12.323680 containerd[1674]: time="2025-04-30T03:29:12.323473219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:12.323680 containerd[1674]: time="2025-04-30T03:29:12.323567920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:12.331844 containerd[1674]: time="2025-04-30T03:29:12.331561703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:12.331844 containerd[1674]: time="2025-04-30T03:29:12.331641804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:12.331844 containerd[1674]: time="2025-04-30T03:29:12.331662704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:12.331844 containerd[1674]: time="2025-04-30T03:29:12.331773205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:12.351813 systemd[1]: Started cri-containerd-bcffcb070eb8b74db4f5c71d6b8c121c6d42892f6435b246bf853a31b49fd3da.scope - libcontainer container bcffcb070eb8b74db4f5c71d6b8c121c6d42892f6435b246bf853a31b49fd3da. Apr 30 03:29:12.363306 systemd[1]: Started cri-containerd-2dccb0cbfa5b9c3b79c12411ece8420a3b5702c33d9711fad15a7d4d1ddf76ec.scope - libcontainer container 2dccb0cbfa5b9c3b79c12411ece8420a3b5702c33d9711fad15a7d4d1ddf76ec. Apr 30 03:29:12.365765 systemd[1]: Started cri-containerd-773d30c59b6c10c5b475130667ae98057ab6dd86f481cc826c5cf31d54b14156.scope - libcontainer container 773d30c59b6c10c5b475130667ae98057ab6dd86f481cc826c5cf31d54b14156. 
Apr 30 03:29:12.448235 containerd[1674]: time="2025-04-30T03:29:12.448008609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-e2728433b6,Uid:b8d2e9981d09d20207636dec565f7559,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcffcb070eb8b74db4f5c71d6b8c121c6d42892f6435b246bf853a31b49fd3da\"" Apr 30 03:29:12.453816 containerd[1674]: time="2025-04-30T03:29:12.453744468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-e2728433b6,Uid:f539cbe465ba94f2f1bcb760a24b81e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"773d30c59b6c10c5b475130667ae98057ab6dd86f481cc826c5cf31d54b14156\"" Apr 30 03:29:12.457233 containerd[1674]: time="2025-04-30T03:29:12.456945901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-e2728433b6,Uid:d5edd97421b19c079763401335d0222e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dccb0cbfa5b9c3b79c12411ece8420a3b5702c33d9711fad15a7d4d1ddf76ec\"" Apr 30 03:29:12.457761 containerd[1674]: time="2025-04-30T03:29:12.457716909Z" level=info msg="CreateContainer within sandbox \"bcffcb070eb8b74db4f5c71d6b8c121c6d42892f6435b246bf853a31b49fd3da\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:29:12.457950 containerd[1674]: time="2025-04-30T03:29:12.457873811Z" level=info msg="CreateContainer within sandbox \"773d30c59b6c10c5b475130667ae98057ab6dd86f481cc826c5cf31d54b14156\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:29:12.460539 containerd[1674]: time="2025-04-30T03:29:12.460421637Z" level=info msg="CreateContainer within sandbox \"2dccb0cbfa5b9c3b79c12411ece8420a3b5702c33d9711fad15a7d4d1ddf76ec\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:29:12.520074 containerd[1674]: time="2025-04-30T03:29:12.519953154Z" level=info msg="CreateContainer within sandbox 
\"bcffcb070eb8b74db4f5c71d6b8c121c6d42892f6435b246bf853a31b49fd3da\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"abe6910303eef1a5f8fc0bda0d0f4c5de9eea0c504427eca457f7ffdd6e1fa41\"" Apr 30 03:29:12.521687 containerd[1674]: time="2025-04-30T03:29:12.521443969Z" level=info msg="StartContainer for \"abe6910303eef1a5f8fc0bda0d0f4c5de9eea0c504427eca457f7ffdd6e1fa41\"" Apr 30 03:29:12.523892 containerd[1674]: time="2025-04-30T03:29:12.523860394Z" level=info msg="CreateContainer within sandbox \"773d30c59b6c10c5b475130667ae98057ab6dd86f481cc826c5cf31d54b14156\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ed07d5ac4b0e99aa39cc672d8b9b54c7522b12af29ea05ad5eca97ff90ca1bba\"" Apr 30 03:29:12.525312 containerd[1674]: time="2025-04-30T03:29:12.525284409Z" level=info msg="StartContainer for \"ed07d5ac4b0e99aa39cc672d8b9b54c7522b12af29ea05ad5eca97ff90ca1bba\"" Apr 30 03:29:12.529618 containerd[1674]: time="2025-04-30T03:29:12.528132938Z" level=info msg="CreateContainer within sandbox \"2dccb0cbfa5b9c3b79c12411ece8420a3b5702c33d9711fad15a7d4d1ddf76ec\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0e6e61795836f12054dc80e285f6b2af8e3cb3f6801a22cecd27355884f7c0cd\"" Apr 30 03:29:12.530421 containerd[1674]: time="2025-04-30T03:29:12.530395862Z" level=info msg="StartContainer for \"0e6e61795836f12054dc80e285f6b2af8e3cb3f6801a22cecd27355884f7c0cd\"" Apr 30 03:29:12.578943 systemd[1]: Started cri-containerd-abe6910303eef1a5f8fc0bda0d0f4c5de9eea0c504427eca457f7ffdd6e1fa41.scope - libcontainer container abe6910303eef1a5f8fc0bda0d0f4c5de9eea0c504427eca457f7ffdd6e1fa41. Apr 30 03:29:12.580525 systemd[1]: Started cri-containerd-ed07d5ac4b0e99aa39cc672d8b9b54c7522b12af29ea05ad5eca97ff90ca1bba.scope - libcontainer container ed07d5ac4b0e99aa39cc672d8b9b54c7522b12af29ea05ad5eca97ff90ca1bba. 
Apr 30 03:29:12.587266 systemd[1]: Started cri-containerd-0e6e61795836f12054dc80e285f6b2af8e3cb3f6801a22cecd27355884f7c0cd.scope - libcontainer container 0e6e61795836f12054dc80e285f6b2af8e3cb3f6801a22cecd27355884f7c0cd. Apr 30 03:29:12.663199 containerd[1674]: time="2025-04-30T03:29:12.663158837Z" level=info msg="StartContainer for \"abe6910303eef1a5f8fc0bda0d0f4c5de9eea0c504427eca457f7ffdd6e1fa41\" returns successfully" Apr 30 03:29:12.697213 containerd[1674]: time="2025-04-30T03:29:12.697167389Z" level=info msg="StartContainer for \"ed07d5ac4b0e99aa39cc672d8b9b54c7522b12af29ea05ad5eca97ff90ca1bba\" returns successfully" Apr 30 03:29:12.710925 containerd[1674]: time="2025-04-30T03:29:12.710881331Z" level=info msg="StartContainer for \"0e6e61795836f12054dc80e285f6b2af8e3cb3f6801a22cecd27355884f7c0cd\" returns successfully" Apr 30 03:29:12.782225 kubelet[2827]: E0430 03:29:12.780640 2827 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.4:6443: connect: connection refused Apr 30 03:29:13.818084 kubelet[2827]: I0430 03:29:13.817613 2827 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-e2728433b6" Apr 30 03:29:14.848203 kubelet[2827]: E0430 03:29:14.848145 2827 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-a-e2728433b6\" not found" node="ci-4081.3.3-a-e2728433b6" Apr 30 03:29:14.910626 kubelet[2827]: I0430 03:29:14.908984 2827 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-a-e2728433b6" Apr 30 03:29:15.678781 kubelet[2827]: I0430 03:29:15.678740 2827 apiserver.go:52] "Watching apiserver" Apr 30 03:29:15.704189 kubelet[2827]: I0430 03:29:15.704149 2827 desired_state_of_world_populator.go:157] "Finished populating 
initial desired state of world" Apr 30 03:29:17.570545 kubelet[2827]: W0430 03:29:17.570057 2827 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:17.920922 systemd[1]: Reloading requested from client PID 3108 ('systemctl') (unit session-9.scope)... Apr 30 03:29:17.920938 systemd[1]: Reloading... Apr 30 03:29:18.002638 zram_generator::config[3147]: No configuration found. Apr 30 03:29:18.161660 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:29:18.261801 systemd[1]: Reloading finished in 340 ms. Apr 30 03:29:18.303069 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:18.303917 kubelet[2827]: E0430 03:29:18.303000 2827 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081.3.3-a-e2728433b6.183afaf87adf076c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-e2728433b6,UID:ci-4081.3.3-a-e2728433b6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-e2728433b6,},FirstTimestamp:2025-04-30 03:29:10.6797587 +0000 UTC m=+1.205914545,LastTimestamp:2025-04-30 03:29:10.6797587 +0000 UTC m=+1.205914545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-e2728433b6,}" Apr 30 03:29:18.310282 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:29:18.310519 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:18.316875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 30 03:29:18.413154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:18.419814 (kubelet)[3215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:29:18.471618 kubelet[3215]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:29:18.471618 kubelet[3215]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:29:18.471618 kubelet[3215]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:29:18.471618 kubelet[3215]: I0430 03:29:18.470813 3215 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:29:18.475935 kubelet[3215]: I0430 03:29:18.475909 3215 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:29:18.476064 kubelet[3215]: I0430 03:29:18.476039 3215 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:29:18.476245 kubelet[3215]: I0430 03:29:18.476226 3215 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:29:18.483505 kubelet[3215]: I0430 03:29:18.478214 3215 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 30 03:29:18.483505 kubelet[3215]: I0430 03:29:18.480864 3215 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:29:18.489548 kubelet[3215]: I0430 03:29:18.489526 3215 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:29:18.489794 kubelet[3215]: I0430 03:29:18.489754 3215 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:29:18.489956 kubelet[3215]: I0430 03:29:18.489788 3215 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-e2728433b6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experiment
alMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:29:18.490082 kubelet[3215]: I0430 03:29:18.489969 3215 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:29:18.490082 kubelet[3215]: I0430 03:29:18.489981 3215 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:29:18.490082 kubelet[3215]: I0430 03:29:18.490030 3215 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:18.490207 kubelet[3215]: I0430 03:29:18.490138 3215 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:29:18.490207 kubelet[3215]: I0430 03:29:18.490154 3215 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:29:18.490207 kubelet[3215]: I0430 03:29:18.490183 3215 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:29:18.490207 kubelet[3215]: I0430 03:29:18.490205 3215 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:29:18.493941 kubelet[3215]: I0430 03:29:18.493925 3215 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:29:18.494206 kubelet[3215]: I0430 03:29:18.494190 3215 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:29:18.496934 kubelet[3215]: I0430 03:29:18.494790 3215 server.go:1264] "Started kubelet" Apr 30 03:29:18.499344 kubelet[3215]: E0430 03:29:18.498350 3215 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:29:18.499344 kubelet[3215]: I0430 03:29:18.498639 3215 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:29:18.500783 kubelet[3215]: I0430 03:29:18.500174 3215 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:29:18.501276 kubelet[3215]: I0430 03:29:18.501073 3215 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:29:18.501345 kubelet[3215]: I0430 03:29:18.501283 3215 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:29:18.501962 kubelet[3215]: I0430 03:29:18.501641 3215 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:29:18.515908 kubelet[3215]: I0430 03:29:18.515830 3215 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:29:18.519019 kubelet[3215]: I0430 03:29:18.518997 3215 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:29:18.519148 kubelet[3215]: I0430 03:29:18.519136 3215 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:29:18.521082 kubelet[3215]: I0430 03:29:18.521049 3215 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:29:18.522562 kubelet[3215]: I0430 03:29:18.522253 3215 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:29:18.522562 kubelet[3215]: I0430 03:29:18.522294 3215 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:29:18.522562 kubelet[3215]: I0430 03:29:18.522311 3215 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:29:18.522562 kubelet[3215]: E0430 03:29:18.522351 3215 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:29:18.532799 kubelet[3215]: I0430 03:29:18.532781 3215 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:29:18.533032 kubelet[3215]: I0430 03:29:18.533013 3215 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:29:18.535874 kubelet[3215]: I0430 03:29:18.535523 3215 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:29:18.584826 kubelet[3215]: I0430 03:29:18.584794 3215 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:29:18.584826 kubelet[3215]: I0430 03:29:18.584818 3215 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:29:18.584826 kubelet[3215]: I0430 03:29:18.584839 3215 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:18.585104 kubelet[3215]: I0430 03:29:18.585004 3215 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:29:18.585104 kubelet[3215]: I0430 03:29:18.585030 3215 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:29:18.585104 kubelet[3215]: I0430 03:29:18.585054 3215 policy_none.go:49] "None policy: Start" Apr 30 03:29:18.585680 kubelet[3215]: I0430 03:29:18.585657 3215 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:29:18.585680 kubelet[3215]: I0430 03:29:18.585680 3215 state_mem.go:35] "Initializing new in-memory state store" Apr 
30 03:29:18.585868 kubelet[3215]: I0430 03:29:18.585832 3215 state_mem.go:75] "Updated machine memory state" Apr 30 03:29:18.590383 kubelet[3215]: I0430 03:29:18.590138 3215 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:29:18.590383 kubelet[3215]: I0430 03:29:18.590323 3215 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:29:18.590643 kubelet[3215]: I0430 03:29:18.590629 3215 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:29:18.619491 kubelet[3215]: I0430 03:29:18.619463 3215 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-e2728433b6" Apr 30 03:29:18.622861 kubelet[3215]: I0430 03:29:18.622815 3215 topology_manager.go:215] "Topology Admit Handler" podUID="b8d2e9981d09d20207636dec565f7559" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:18.622969 kubelet[3215]: I0430 03:29:18.622913 3215 topology_manager.go:215] "Topology Admit Handler" podUID="f539cbe465ba94f2f1bcb760a24b81e2" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:18.623020 kubelet[3215]: I0430 03:29:18.622989 3215 topology_manager.go:215] "Topology Admit Handler" podUID="d5edd97421b19c079763401335d0222e" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:18.634195 kubelet[3215]: W0430 03:29:18.633255 3215 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:18.634195 kubelet[3215]: W0430 03:29:18.633511 3215 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:18.634891 kubelet[3215]: W0430 03:29:18.634870 3215 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:18.635044 kubelet[3215]: E0430 03:29:18.635017 3215 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.3-a-e2728433b6\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:18.636747 kubelet[3215]: I0430 03:29:18.636731 3215 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.3-a-e2728433b6" Apr 30 03:29:18.636962 kubelet[3215]: I0430 03:29:18.636860 3215 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-a-e2728433b6" Apr 30 03:29:18.820735 kubelet[3215]: I0430 03:29:18.820511 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f539cbe465ba94f2f1bcb760a24b81e2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-e2728433b6\" (UID: \"f539cbe465ba94f2f1bcb760a24b81e2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:18.820735 kubelet[3215]: I0430 03:29:18.820560 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8d2e9981d09d20207636dec565f7559-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-e2728433b6\" (UID: \"b8d2e9981d09d20207636dec565f7559\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:18.820735 kubelet[3215]: I0430 03:29:18.820586 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8d2e9981d09d20207636dec565f7559-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-e2728433b6\" (UID: \"b8d2e9981d09d20207636dec565f7559\") " 
pod="kube-system/kube-apiserver-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:18.820735 kubelet[3215]: I0430 03:29:18.820622 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8d2e9981d09d20207636dec565f7559-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-e2728433b6\" (UID: \"b8d2e9981d09d20207636dec565f7559\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:18.820735 kubelet[3215]: I0430 03:29:18.820646 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f539cbe465ba94f2f1bcb760a24b81e2-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-e2728433b6\" (UID: \"f539cbe465ba94f2f1bcb760a24b81e2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:18.821078 kubelet[3215]: I0430 03:29:18.820697 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f539cbe465ba94f2f1bcb760a24b81e2-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-e2728433b6\" (UID: \"f539cbe465ba94f2f1bcb760a24b81e2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:18.821078 kubelet[3215]: I0430 03:29:18.820721 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5edd97421b19c079763401335d0222e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-e2728433b6\" (UID: \"d5edd97421b19c079763401335d0222e\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:18.821078 kubelet[3215]: I0430 03:29:18.820743 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/f539cbe465ba94f2f1bcb760a24b81e2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-e2728433b6\" (UID: \"f539cbe465ba94f2f1bcb760a24b81e2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:18.821078 kubelet[3215]: I0430 03:29:18.820763 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f539cbe465ba94f2f1bcb760a24b81e2-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-e2728433b6\" (UID: \"f539cbe465ba94f2f1bcb760a24b81e2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:19.491223 kubelet[3215]: I0430 03:29:19.490858 3215 apiserver.go:52] "Watching apiserver" Apr 30 03:29:19.519301 kubelet[3215]: I0430 03:29:19.519176 3215 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:29:19.584937 kubelet[3215]: W0430 03:29:19.584798 3215 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:19.584937 kubelet[3215]: E0430 03:29:19.584878 3215 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-a-e2728433b6\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-a-e2728433b6" Apr 30 03:29:19.622316 kubelet[3215]: I0430 03:29:19.621783 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-a-e2728433b6" podStartSLOduration=1.621762508 podStartE2EDuration="1.621762508s" podCreationTimestamp="2025-04-30 03:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:19.6113539 +0000 UTC m=+1.186745310" watchObservedRunningTime="2025-04-30 03:29:19.621762508 +0000 UTC m=+1.197154018" Apr 30 
03:29:19.635551 kubelet[3215]: I0430 03:29:19.635367 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-a-e2728433b6" podStartSLOduration=1.6353523490000001 podStartE2EDuration="1.635352349s" podCreationTimestamp="2025-04-30 03:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:19.622250713 +0000 UTC m=+1.197642123" watchObservedRunningTime="2025-04-30 03:29:19.635352349 +0000 UTC m=+1.210743759" Apr 30 03:29:19.647094 kubelet[3215]: I0430 03:29:19.646951 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-e2728433b6" podStartSLOduration=2.646933169 podStartE2EDuration="2.646933169s" podCreationTimestamp="2025-04-30 03:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:19.635915955 +0000 UTC m=+1.211307365" watchObservedRunningTime="2025-04-30 03:29:19.646933169 +0000 UTC m=+1.222324679" Apr 30 03:29:24.271315 sudo[2188]: pam_unix(sudo:session): session closed for user root Apr 30 03:29:24.371388 sshd[2185]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:24.375663 systemd[1]: sshd@6-10.200.8.4:22-10.200.16.10:58882.service: Deactivated successfully. Apr 30 03:29:24.377648 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:29:24.377856 systemd[1]: session-9.scope: Consumed 4.891s CPU time, 186.9M memory peak, 0B memory swap peak. Apr 30 03:29:24.378616 systemd-logind[1656]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:29:24.379945 systemd-logind[1656]: Removed session 9. 
Apr 30 03:29:31.066163 kubelet[3215]: I0430 03:29:31.066118 3215 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:29:31.068768 containerd[1674]: time="2025-04-30T03:29:31.068224411Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 03:29:31.069141 kubelet[3215]: I0430 03:29:31.068553 3215 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:29:31.078508 kubelet[3215]: I0430 03:29:31.078469 3215 topology_manager.go:215] "Topology Admit Handler" podUID="3a1184fb-6ae6-4ad4-a146-10684c6bddb9" podNamespace="kube-system" podName="kube-proxy-7vzx6" Apr 30 03:29:31.089928 systemd[1]: Created slice kubepods-besteffort-pod3a1184fb_6ae6_4ad4_a146_10684c6bddb9.slice - libcontainer container kubepods-besteffort-pod3a1184fb_6ae6_4ad4_a146_10684c6bddb9.slice. Apr 30 03:29:31.107230 kubelet[3215]: I0430 03:29:31.107184 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a1184fb-6ae6-4ad4-a146-10684c6bddb9-xtables-lock\") pod \"kube-proxy-7vzx6\" (UID: \"3a1184fb-6ae6-4ad4-a146-10684c6bddb9\") " pod="kube-system/kube-proxy-7vzx6" Apr 30 03:29:31.107387 kubelet[3215]: I0430 03:29:31.107238 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsrqz\" (UniqueName: \"kubernetes.io/projected/3a1184fb-6ae6-4ad4-a146-10684c6bddb9-kube-api-access-qsrqz\") pod \"kube-proxy-7vzx6\" (UID: \"3a1184fb-6ae6-4ad4-a146-10684c6bddb9\") " pod="kube-system/kube-proxy-7vzx6" Apr 30 03:29:31.107387 kubelet[3215]: I0430 03:29:31.107269 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a1184fb-6ae6-4ad4-a146-10684c6bddb9-kube-proxy\") pod \"kube-proxy-7vzx6\" 
(UID: \"3a1184fb-6ae6-4ad4-a146-10684c6bddb9\") " pod="kube-system/kube-proxy-7vzx6" Apr 30 03:29:31.107387 kubelet[3215]: I0430 03:29:31.107288 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a1184fb-6ae6-4ad4-a146-10684c6bddb9-lib-modules\") pod \"kube-proxy-7vzx6\" (UID: \"3a1184fb-6ae6-4ad4-a146-10684c6bddb9\") " pod="kube-system/kube-proxy-7vzx6" Apr 30 03:29:31.213519 kubelet[3215]: E0430 03:29:31.213485 3215 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 30 03:29:31.213519 kubelet[3215]: E0430 03:29:31.213517 3215 projected.go:200] Error preparing data for projected volume kube-api-access-qsrqz for pod kube-system/kube-proxy-7vzx6: configmap "kube-root-ca.crt" not found Apr 30 03:29:31.213790 kubelet[3215]: E0430 03:29:31.213584 3215 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3a1184fb-6ae6-4ad4-a146-10684c6bddb9-kube-api-access-qsrqz podName:3a1184fb-6ae6-4ad4-a146-10684c6bddb9 nodeName:}" failed. No retries permitted until 2025-04-30 03:29:31.713561747 +0000 UTC m=+13.288953157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qsrqz" (UniqueName: "kubernetes.io/projected/3a1184fb-6ae6-4ad4-a146-10684c6bddb9-kube-api-access-qsrqz") pod "kube-proxy-7vzx6" (UID: "3a1184fb-6ae6-4ad4-a146-10684c6bddb9") : configmap "kube-root-ca.crt" not found Apr 30 03:29:31.726021 kubelet[3215]: I0430 03:29:31.724470 3215 topology_manager.go:215] "Topology Admit Handler" podUID="72c47bf0-54a5-4b7c-9e92-24b29ce2db56" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-kphkm" Apr 30 03:29:31.733990 systemd[1]: Created slice kubepods-besteffort-pod72c47bf0_54a5_4b7c_9e92_24b29ce2db56.slice - libcontainer container kubepods-besteffort-pod72c47bf0_54a5_4b7c_9e92_24b29ce2db56.slice. 
Apr 30 03:29:31.812708 kubelet[3215]: I0430 03:29:31.812661 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/72c47bf0-54a5-4b7c-9e92-24b29ce2db56-var-lib-calico\") pod \"tigera-operator-797db67f8-kphkm\" (UID: \"72c47bf0-54a5-4b7c-9e92-24b29ce2db56\") " pod="tigera-operator/tigera-operator-797db67f8-kphkm" Apr 30 03:29:31.812708 kubelet[3215]: I0430 03:29:31.812704 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjl4l\" (UniqueName: \"kubernetes.io/projected/72c47bf0-54a5-4b7c-9e92-24b29ce2db56-kube-api-access-qjl4l\") pod \"tigera-operator-797db67f8-kphkm\" (UID: \"72c47bf0-54a5-4b7c-9e92-24b29ce2db56\") " pod="tigera-operator/tigera-operator-797db67f8-kphkm" Apr 30 03:29:31.997976 containerd[1674]: time="2025-04-30T03:29:31.997809497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7vzx6,Uid:3a1184fb-6ae6-4ad4-a146-10684c6bddb9,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:32.036214 containerd[1674]: time="2025-04-30T03:29:32.035995174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:32.036214 containerd[1674]: time="2025-04-30T03:29:32.036044075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:32.036214 containerd[1674]: time="2025-04-30T03:29:32.036063275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:32.036214 containerd[1674]: time="2025-04-30T03:29:32.036146776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:32.040895 containerd[1674]: time="2025-04-30T03:29:32.040849222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-kphkm,Uid:72c47bf0-54a5-4b7c-9e92-24b29ce2db56,Namespace:tigera-operator,Attempt:0,}" Apr 30 03:29:32.058747 systemd[1]: Started cri-containerd-61e2aa10229545c65dc22d225003e2fd0032a0e12533f90fbf02a3219059cb38.scope - libcontainer container 61e2aa10229545c65dc22d225003e2fd0032a0e12533f90fbf02a3219059cb38. Apr 30 03:29:32.093303 containerd[1674]: time="2025-04-30T03:29:32.093258140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7vzx6,Uid:3a1184fb-6ae6-4ad4-a146-10684c6bddb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"61e2aa10229545c65dc22d225003e2fd0032a0e12533f90fbf02a3219059cb38\"" Apr 30 03:29:32.096843 containerd[1674]: time="2025-04-30T03:29:32.096803675Z" level=info msg="CreateContainer within sandbox \"61e2aa10229545c65dc22d225003e2fd0032a0e12533f90fbf02a3219059cb38\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:29:32.101772 containerd[1674]: time="2025-04-30T03:29:32.101543122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:32.101772 containerd[1674]: time="2025-04-30T03:29:32.101679523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:32.101772 containerd[1674]: time="2025-04-30T03:29:32.101713424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:32.101944 containerd[1674]: time="2025-04-30T03:29:32.101850725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:32.120769 systemd[1]: Started cri-containerd-ecfe5ef4bf41bb05db3adc9ca28a05c11887142ed85453f2e7064cf8cb10caa7.scope - libcontainer container ecfe5ef4bf41bb05db3adc9ca28a05c11887142ed85453f2e7064cf8cb10caa7. Apr 30 03:29:32.131967 containerd[1674]: time="2025-04-30T03:29:32.131923922Z" level=info msg="CreateContainer within sandbox \"61e2aa10229545c65dc22d225003e2fd0032a0e12533f90fbf02a3219059cb38\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d8780b8a22dc47e8ea1eeedb3ae6a58153e46329c6fdca85aff6fcb481e9c85\"" Apr 30 03:29:32.133010 containerd[1674]: time="2025-04-30T03:29:32.132981733Z" level=info msg="StartContainer for \"2d8780b8a22dc47e8ea1eeedb3ae6a58153e46329c6fdca85aff6fcb481e9c85\"" Apr 30 03:29:32.174770 systemd[1]: Started cri-containerd-2d8780b8a22dc47e8ea1eeedb3ae6a58153e46329c6fdca85aff6fcb481e9c85.scope - libcontainer container 2d8780b8a22dc47e8ea1eeedb3ae6a58153e46329c6fdca85aff6fcb481e9c85. 
Apr 30 03:29:32.175980 containerd[1674]: time="2025-04-30T03:29:32.175942557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-kphkm,Uid:72c47bf0-54a5-4b7c-9e92-24b29ce2db56,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ecfe5ef4bf41bb05db3adc9ca28a05c11887142ed85453f2e7064cf8cb10caa7\"" Apr 30 03:29:32.179614 containerd[1674]: time="2025-04-30T03:29:32.179341591Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 03:29:32.209110 containerd[1674]: time="2025-04-30T03:29:32.208575180Z" level=info msg="StartContainer for \"2d8780b8a22dc47e8ea1eeedb3ae6a58153e46329c6fdca85aff6fcb481e9c85\" returns successfully" Apr 30 03:29:32.605718 kubelet[3215]: I0430 03:29:32.605560 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7vzx6" podStartSLOduration=1.605540502 podStartE2EDuration="1.605540502s" podCreationTimestamp="2025-04-30 03:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:32.605526902 +0000 UTC m=+14.180918412" watchObservedRunningTime="2025-04-30 03:29:32.605540502 +0000 UTC m=+14.180932012" Apr 30 03:29:33.580708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount203727839.mount: Deactivated successfully. 
Apr 30 03:29:34.393310 containerd[1674]: time="2025-04-30T03:29:34.393260268Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:34.395271 containerd[1674]: time="2025-04-30T03:29:34.395165587Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 03:29:34.399492 containerd[1674]: time="2025-04-30T03:29:34.399434729Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:34.403709 containerd[1674]: time="2025-04-30T03:29:34.403653371Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:34.404527 containerd[1674]: time="2025-04-30T03:29:34.404388378Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.224995387s" Apr 30 03:29:34.404527 containerd[1674]: time="2025-04-30T03:29:34.404426779Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 03:29:34.407031 containerd[1674]: time="2025-04-30T03:29:34.406812302Z" level=info msg="CreateContainer within sandbox \"ecfe5ef4bf41bb05db3adc9ca28a05c11887142ed85453f2e7064cf8cb10caa7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 03:29:34.439778 containerd[1674]: time="2025-04-30T03:29:34.439736627Z" level=info msg="CreateContainer within sandbox 
\"ecfe5ef4bf41bb05db3adc9ca28a05c11887142ed85453f2e7064cf8cb10caa7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1b5e6256f34df5dc621f306c3b1a1c62243a993504ba42e9318ea6e16103183f\"" Apr 30 03:29:34.441266 containerd[1674]: time="2025-04-30T03:29:34.440319033Z" level=info msg="StartContainer for \"1b5e6256f34df5dc621f306c3b1a1c62243a993504ba42e9318ea6e16103183f\"" Apr 30 03:29:34.468404 systemd[1]: run-containerd-runc-k8s.io-1b5e6256f34df5dc621f306c3b1a1c62243a993504ba42e9318ea6e16103183f-runc.XISRIu.mount: Deactivated successfully. Apr 30 03:29:34.474750 systemd[1]: Started cri-containerd-1b5e6256f34df5dc621f306c3b1a1c62243a993504ba42e9318ea6e16103183f.scope - libcontainer container 1b5e6256f34df5dc621f306c3b1a1c62243a993504ba42e9318ea6e16103183f. Apr 30 03:29:34.503907 containerd[1674]: time="2025-04-30T03:29:34.503799860Z" level=info msg="StartContainer for \"1b5e6256f34df5dc621f306c3b1a1c62243a993504ba42e9318ea6e16103183f\" returns successfully" Apr 30 03:29:37.676627 kubelet[3215]: I0430 03:29:37.673360 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-kphkm" podStartSLOduration=4.44592478 podStartE2EDuration="6.673338091s" podCreationTimestamp="2025-04-30 03:29:31 +0000 UTC" firstStartedPulling="2025-04-30 03:29:32.177922277 +0000 UTC m=+13.753313687" lastFinishedPulling="2025-04-30 03:29:34.405335588 +0000 UTC m=+15.980726998" observedRunningTime="2025-04-30 03:29:34.607388184 +0000 UTC m=+16.182779594" watchObservedRunningTime="2025-04-30 03:29:37.673338091 +0000 UTC m=+19.248729601" Apr 30 03:29:37.676627 kubelet[3215]: I0430 03:29:37.673534 3215 topology_manager.go:215] "Topology Admit Handler" podUID="dd3ded02-47ea-4d27-8d20-73c272393e35" podNamespace="calico-system" podName="calico-typha-66dd6747d8-sg22f" Apr 30 03:29:37.684513 systemd[1]: Created slice kubepods-besteffort-poddd3ded02_47ea_4d27_8d20_73c272393e35.slice - libcontainer container 
kubepods-besteffort-poddd3ded02_47ea_4d27_8d20_73c272393e35.slice. Apr 30 03:29:37.749057 kubelet[3215]: I0430 03:29:37.749018 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dd3ded02-47ea-4d27-8d20-73c272393e35-typha-certs\") pod \"calico-typha-66dd6747d8-sg22f\" (UID: \"dd3ded02-47ea-4d27-8d20-73c272393e35\") " pod="calico-system/calico-typha-66dd6747d8-sg22f" Apr 30 03:29:37.749563 kubelet[3215]: I0430 03:29:37.749065 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd3ded02-47ea-4d27-8d20-73c272393e35-tigera-ca-bundle\") pod \"calico-typha-66dd6747d8-sg22f\" (UID: \"dd3ded02-47ea-4d27-8d20-73c272393e35\") " pod="calico-system/calico-typha-66dd6747d8-sg22f" Apr 30 03:29:37.749563 kubelet[3215]: I0430 03:29:37.749092 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqcqx\" (UniqueName: \"kubernetes.io/projected/dd3ded02-47ea-4d27-8d20-73c272393e35-kube-api-access-rqcqx\") pod \"calico-typha-66dd6747d8-sg22f\" (UID: \"dd3ded02-47ea-4d27-8d20-73c272393e35\") " pod="calico-system/calico-typha-66dd6747d8-sg22f" Apr 30 03:29:37.775944 kubelet[3215]: I0430 03:29:37.775875 3215 topology_manager.go:215] "Topology Admit Handler" podUID="b068409d-772c-47d4-9ea6-2d9868ae1737" podNamespace="calico-system" podName="calico-node-bd5vn" Apr 30 03:29:37.785648 systemd[1]: Created slice kubepods-besteffort-podb068409d_772c_47d4_9ea6_2d9868ae1737.slice - libcontainer container kubepods-besteffort-podb068409d_772c_47d4_9ea6_2d9868ae1737.slice. 
Apr 30 03:29:37.850340 kubelet[3215]: I0430 03:29:37.850297 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b068409d-772c-47d4-9ea6-2d9868ae1737-xtables-lock\") pod \"calico-node-bd5vn\" (UID: \"b068409d-772c-47d4-9ea6-2d9868ae1737\") " pod="calico-system/calico-node-bd5vn" Apr 30 03:29:37.850340 kubelet[3215]: I0430 03:29:37.850346 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b068409d-772c-47d4-9ea6-2d9868ae1737-policysync\") pod \"calico-node-bd5vn\" (UID: \"b068409d-772c-47d4-9ea6-2d9868ae1737\") " pod="calico-system/calico-node-bd5vn" Apr 30 03:29:37.850552 kubelet[3215]: I0430 03:29:37.850367 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b068409d-772c-47d4-9ea6-2d9868ae1737-node-certs\") pod \"calico-node-bd5vn\" (UID: \"b068409d-772c-47d4-9ea6-2d9868ae1737\") " pod="calico-system/calico-node-bd5vn" Apr 30 03:29:37.850552 kubelet[3215]: I0430 03:29:37.850388 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4rcj\" (UniqueName: \"kubernetes.io/projected/b068409d-772c-47d4-9ea6-2d9868ae1737-kube-api-access-z4rcj\") pod \"calico-node-bd5vn\" (UID: \"b068409d-772c-47d4-9ea6-2d9868ae1737\") " pod="calico-system/calico-node-bd5vn" Apr 30 03:29:37.850552 kubelet[3215]: I0430 03:29:37.850413 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b068409d-772c-47d4-9ea6-2d9868ae1737-cni-net-dir\") pod \"calico-node-bd5vn\" (UID: \"b068409d-772c-47d4-9ea6-2d9868ae1737\") " pod="calico-system/calico-node-bd5vn" Apr 30 03:29:37.850552 kubelet[3215]: I0430 03:29:37.850434 3215 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b068409d-772c-47d4-9ea6-2d9868ae1737-cni-log-dir\") pod \"calico-node-bd5vn\" (UID: \"b068409d-772c-47d4-9ea6-2d9868ae1737\") " pod="calico-system/calico-node-bd5vn" Apr 30 03:29:37.850552 kubelet[3215]: I0430 03:29:37.850455 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b068409d-772c-47d4-9ea6-2d9868ae1737-var-lib-calico\") pod \"calico-node-bd5vn\" (UID: \"b068409d-772c-47d4-9ea6-2d9868ae1737\") " pod="calico-system/calico-node-bd5vn" Apr 30 03:29:37.850777 kubelet[3215]: I0430 03:29:37.850476 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b068409d-772c-47d4-9ea6-2d9868ae1737-lib-modules\") pod \"calico-node-bd5vn\" (UID: \"b068409d-772c-47d4-9ea6-2d9868ae1737\") " pod="calico-system/calico-node-bd5vn" Apr 30 03:29:37.850777 kubelet[3215]: I0430 03:29:37.850494 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b068409d-772c-47d4-9ea6-2d9868ae1737-cni-bin-dir\") pod \"calico-node-bd5vn\" (UID: \"b068409d-772c-47d4-9ea6-2d9868ae1737\") " pod="calico-system/calico-node-bd5vn" Apr 30 03:29:37.850777 kubelet[3215]: I0430 03:29:37.850555 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b068409d-772c-47d4-9ea6-2d9868ae1737-var-run-calico\") pod \"calico-node-bd5vn\" (UID: \"b068409d-772c-47d4-9ea6-2d9868ae1737\") " pod="calico-system/calico-node-bd5vn" Apr 30 03:29:37.850777 kubelet[3215]: I0430 03:29:37.850578 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b068409d-772c-47d4-9ea6-2d9868ae1737-flexvol-driver-host\") pod \"calico-node-bd5vn\" (UID: \"b068409d-772c-47d4-9ea6-2d9868ae1737\") " pod="calico-system/calico-node-bd5vn" Apr 30 03:29:37.850777 kubelet[3215]: I0430 03:29:37.850635 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b068409d-772c-47d4-9ea6-2d9868ae1737-tigera-ca-bundle\") pod \"calico-node-bd5vn\" (UID: \"b068409d-772c-47d4-9ea6-2d9868ae1737\") " pod="calico-system/calico-node-bd5vn" Apr 30 03:29:37.915736 kubelet[3215]: I0430 03:29:37.915690 3215 topology_manager.go:215] "Topology Admit Handler" podUID="dcdc5f6d-cefa-4e15-8498-441a243c70ee" podNamespace="calico-system" podName="csi-node-driver-kz4tb" Apr 30 03:29:37.916850 kubelet[3215]: E0430 03:29:37.916678 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kz4tb" podUID="dcdc5f6d-cefa-4e15-8498-441a243c70ee" Apr 30 03:29:37.952793 kubelet[3215]: I0430 03:29:37.951706 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dcdc5f6d-cefa-4e15-8498-441a243c70ee-registration-dir\") pod \"csi-node-driver-kz4tb\" (UID: \"dcdc5f6d-cefa-4e15-8498-441a243c70ee\") " pod="calico-system/csi-node-driver-kz4tb" Apr 30 03:29:37.952793 kubelet[3215]: I0430 03:29:37.951762 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dcdc5f6d-cefa-4e15-8498-441a243c70ee-varrun\") pod \"csi-node-driver-kz4tb\" (UID: \"dcdc5f6d-cefa-4e15-8498-441a243c70ee\") " 
pod="calico-system/csi-node-driver-kz4tb" Apr 30 03:29:37.952793 kubelet[3215]: I0430 03:29:37.951838 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dcdc5f6d-cefa-4e15-8498-441a243c70ee-socket-dir\") pod \"csi-node-driver-kz4tb\" (UID: \"dcdc5f6d-cefa-4e15-8498-441a243c70ee\") " pod="calico-system/csi-node-driver-kz4tb" Apr 30 03:29:37.952793 kubelet[3215]: I0430 03:29:37.951865 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6jzx\" (UniqueName: \"kubernetes.io/projected/dcdc5f6d-cefa-4e15-8498-441a243c70ee-kube-api-access-g6jzx\") pod \"csi-node-driver-kz4tb\" (UID: \"dcdc5f6d-cefa-4e15-8498-441a243c70ee\") " pod="calico-system/csi-node-driver-kz4tb" Apr 30 03:29:37.952793 kubelet[3215]: I0430 03:29:37.951911 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcdc5f6d-cefa-4e15-8498-441a243c70ee-kubelet-dir\") pod \"csi-node-driver-kz4tb\" (UID: \"dcdc5f6d-cefa-4e15-8498-441a243c70ee\") " pod="calico-system/csi-node-driver-kz4tb" Apr 30 03:29:37.955769 kubelet[3215]: E0430 03:29:37.955742 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.955936 kubelet[3215]: W0430 03:29:37.955915 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.956096 kubelet[3215]: E0430 03:29:37.956076 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:37.959004 kubelet[3215]: E0430 03:29:37.958987 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.960788 kubelet[3215]: W0430 03:29:37.960766 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.961073 kubelet[3215]: E0430 03:29:37.960949 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:37.961668 kubelet[3215]: E0430 03:29:37.961647 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.961668 kubelet[3215]: W0430 03:29:37.961667 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.961864 kubelet[3215]: E0430 03:29:37.961728 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:37.963713 kubelet[3215]: E0430 03:29:37.962763 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.963713 kubelet[3215]: W0430 03:29:37.962778 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.963713 kubelet[3215]: E0430 03:29:37.962983 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.963713 kubelet[3215]: W0430 03:29:37.962994 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.963988 kubelet[3215]: E0430 03:29:37.963941 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:37.963988 kubelet[3215]: E0430 03:29:37.963967 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:37.964531 kubelet[3215]: E0430 03:29:37.964513 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.964531 kubelet[3215]: W0430 03:29:37.964530 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.965682 kubelet[3215]: E0430 03:29:37.964659 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:37.965682 kubelet[3215]: E0430 03:29:37.964804 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.965682 kubelet[3215]: W0430 03:29:37.964816 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.965682 kubelet[3215]: E0430 03:29:37.965034 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.965682 kubelet[3215]: W0430 03:29:37.965045 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.965682 kubelet[3215]: E0430 03:29:37.965211 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.965682 kubelet[3215]: W0430 03:29:37.965220 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.965682 kubelet[3215]: E0430 03:29:37.965630 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:37.965682 kubelet[3215]: E0430 03:29:37.965647 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:37.965682 kubelet[3215]: E0430 03:29:37.965659 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:37.966843 kubelet[3215]: E0430 03:29:37.965818 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.966843 kubelet[3215]: W0430 03:29:37.965831 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.966843 kubelet[3215]: E0430 03:29:37.965935 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:37.966843 kubelet[3215]: E0430 03:29:37.966080 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.966843 kubelet[3215]: W0430 03:29:37.966090 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.966843 kubelet[3215]: E0430 03:29:37.966173 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:37.966843 kubelet[3215]: E0430 03:29:37.966321 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.966843 kubelet[3215]: W0430 03:29:37.966331 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.966843 kubelet[3215]: E0430 03:29:37.966441 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:37.966843 kubelet[3215]: E0430 03:29:37.966617 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.968897 kubelet[3215]: W0430 03:29:37.966629 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.968897 kubelet[3215]: E0430 03:29:37.966711 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:37.968897 kubelet[3215]: E0430 03:29:37.966868 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.968897 kubelet[3215]: W0430 03:29:37.966877 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.968897 kubelet[3215]: E0430 03:29:37.966959 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:37.968897 kubelet[3215]: E0430 03:29:37.967077 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.968897 kubelet[3215]: W0430 03:29:37.967086 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.968897 kubelet[3215]: E0430 03:29:37.967114 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:37.968897 kubelet[3215]: E0430 03:29:37.967357 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.968897 kubelet[3215]: W0430 03:29:37.967369 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.969442 kubelet[3215]: E0430 03:29:37.967394 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:37.969442 kubelet[3215]: E0430 03:29:37.967659 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.969442 kubelet[3215]: W0430 03:29:37.967672 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.969442 kubelet[3215]: E0430 03:29:37.967688 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:37.969442 kubelet[3215]: E0430 03:29:37.967890 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.969442 kubelet[3215]: W0430 03:29:37.967901 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.969442 kubelet[3215]: E0430 03:29:37.967927 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:37.969442 kubelet[3215]: E0430 03:29:37.968145 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:37.969442 kubelet[3215]: W0430 03:29:37.968156 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:37.969442 kubelet[3215]: E0430 03:29:37.968185 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:37.992837 containerd[1674]: time="2025-04-30T03:29:37.992787670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66dd6747d8-sg22f,Uid:dd3ded02-47ea-4d27-8d20-73c272393e35,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:38.054848 containerd[1674]: time="2025-04-30T03:29:38.053566775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:38.054848 containerd[1674]: time="2025-04-30T03:29:38.053641276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:38.054848 containerd[1674]: time="2025-04-30T03:29:38.053683876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:38.054848 containerd[1674]: time="2025-04-30T03:29:38.053790677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:38.087212 systemd[1]: Started cri-containerd-e22776191c37db9613ed7a11d6a38c4fd64c55b099f0b4080f9ecc36fc0d276e.scope - libcontainer container e22776191c37db9613ed7a11d6a38c4fd64c55b099f0b4080f9ecc36fc0d276e. Apr 30 03:29:38.091559 containerd[1674]: time="2025-04-30T03:29:38.091509552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bd5vn,Uid:b068409d-772c-47d4-9ea6-2d9868ae1737,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:38.156247 containerd[1674]: time="2025-04-30T03:29:38.155656091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:38.156247 containerd[1674]: time="2025-04-30T03:29:38.155767192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:38.156247 containerd[1674]: time="2025-04-30T03:29:38.155791392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:38.156247 containerd[1674]: time="2025-04-30T03:29:38.156023494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:38.196801 systemd[1]: Started cri-containerd-3c4f79cb78ade810f07564e37154d51c7d56dfeaa01b0e3fc2f53b4ea2370b2a.scope - libcontainer container 3c4f79cb78ade810f07564e37154d51c7d56dfeaa01b0e3fc2f53b4ea2370b2a. Apr 30 03:29:38.199326 containerd[1674]: time="2025-04-30T03:29:38.199022322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66dd6747d8-sg22f,Uid:dd3ded02-47ea-4d27-8d20-73c272393e35,Namespace:calico-system,Attempt:0,} returns sandbox id \"e22776191c37db9613ed7a11d6a38c4fd64c55b099f0b4080f9ecc36fc0d276e\"" Apr 30 03:29:38.201711 containerd[1674]: time="2025-04-30T03:29:38.201454947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 03:29:38.238063 containerd[1674]: time="2025-04-30T03:29:38.237949310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bd5vn,Uid:b068409d-772c-47d4-9ea6-2d9868ae1737,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c4f79cb78ade810f07564e37154d51c7d56dfeaa01b0e3fc2f53b4ea2370b2a\"" Apr 30 03:29:39.523497 kubelet[3215]: E0430 03:29:39.523439 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-kz4tb" podUID="dcdc5f6d-cefa-4e15-8498-441a243c70ee" Apr 30 03:29:40.182087 containerd[1674]: time="2025-04-30T03:29:40.182042457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:40.184824 containerd[1674]: time="2025-04-30T03:29:40.184758684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 03:29:40.188532 containerd[1674]: time="2025-04-30T03:29:40.188483521Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:40.197145 containerd[1674]: time="2025-04-30T03:29:40.197013406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 1.995521059s" Apr 30 03:29:40.197145 containerd[1674]: time="2025-04-30T03:29:40.197053307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:29:40.197527 containerd[1674]: time="2025-04-30T03:29:40.197379410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:40.199720 containerd[1674]: time="2025-04-30T03:29:40.199695833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 03:29:40.215538 containerd[1674]: time="2025-04-30T03:29:40.215504790Z" level=info msg="CreateContainer 
within sandbox \"e22776191c37db9613ed7a11d6a38c4fd64c55b099f0b4080f9ecc36fc0d276e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:29:40.266202 containerd[1674]: time="2025-04-30T03:29:40.266157294Z" level=info msg="CreateContainer within sandbox \"e22776191c37db9613ed7a11d6a38c4fd64c55b099f0b4080f9ecc36fc0d276e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5f5d34e75f871d1e2f9d9d6a8abaee8ee6e98567585ee98f00ad28ee77b1d932\"" Apr 30 03:29:40.266730 containerd[1674]: time="2025-04-30T03:29:40.266692800Z" level=info msg="StartContainer for \"5f5d34e75f871d1e2f9d9d6a8abaee8ee6e98567585ee98f00ad28ee77b1d932\"" Apr 30 03:29:40.298772 systemd[1]: Started cri-containerd-5f5d34e75f871d1e2f9d9d6a8abaee8ee6e98567585ee98f00ad28ee77b1d932.scope - libcontainer container 5f5d34e75f871d1e2f9d9d6a8abaee8ee6e98567585ee98f00ad28ee77b1d932. Apr 30 03:29:40.345435 containerd[1674]: time="2025-04-30T03:29:40.345044180Z" level=info msg="StartContainer for \"5f5d34e75f871d1e2f9d9d6a8abaee8ee6e98567585ee98f00ad28ee77b1d932\" returns successfully" Apr 30 03:29:40.666025 kubelet[3215]: E0430 03:29:40.665999 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:40.666025 kubelet[3215]: W0430 03:29:40.666019 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:40.666724 kubelet[3215]: E0430 03:29:40.666041 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:40.666724 kubelet[3215]: E0430 03:29:40.666276 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:40.666724 kubelet[3215]: W0430 03:29:40.666288 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:40.666724 kubelet[3215]: E0430 03:29:40.666303 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:40.666724 kubelet[3215]: E0430 03:29:40.666504 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:40.666724 kubelet[3215]: W0430 03:29:40.666515 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:40.666724 kubelet[3215]: E0430 03:29:40.666529 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 30 03:29:41.523471 kubelet[3215]: E0430 03:29:41.523391 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kz4tb" podUID="dcdc5f6d-cefa-4e15-8498-441a243c70ee"
Apr 30 03:29:41.612504 kubelet[3215]: I0430 03:29:41.612257 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 30 03:29:41.675828 kubelet[3215]: E0430 03:29:41.675793 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:41.675828 kubelet[3215]: W0430 03:29:41.675816 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:41.678026 kubelet[3215]: E0430 03:29:41.675840 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:41.678026 kubelet[3215]: E0430 03:29:41.676130 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:41.678026 kubelet[3215]: W0430 03:29:41.676144 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:41.678026 kubelet[3215]: E0430 03:29:41.676160 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Apr 30 03:29:41.690440 kubelet[3215]: E0430 03:29:41.690416 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.690440 kubelet[3215]: W0430 03:29:41.690436 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.690565 kubelet[3215]: E0430 03:29:41.690454 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.690714 kubelet[3215]: E0430 03:29:41.690697 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.690714 kubelet[3215]: W0430 03:29:41.690710 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.690818 kubelet[3215]: E0430 03:29:41.690723 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:41.690980 kubelet[3215]: E0430 03:29:41.690964 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.690980 kubelet[3215]: W0430 03:29:41.690977 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.691079 kubelet[3215]: E0430 03:29:41.691062 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.691366 kubelet[3215]: E0430 03:29:41.691350 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.691366 kubelet[3215]: W0430 03:29:41.691363 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.691478 kubelet[3215]: E0430 03:29:41.691450 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:41.691637 kubelet[3215]: E0430 03:29:41.691620 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.691637 kubelet[3215]: W0430 03:29:41.691633 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.691772 kubelet[3215]: E0430 03:29:41.691720 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.691880 kubelet[3215]: E0430 03:29:41.691863 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.691880 kubelet[3215]: W0430 03:29:41.691877 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.691986 kubelet[3215]: E0430 03:29:41.691895 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:41.692179 kubelet[3215]: E0430 03:29:41.692161 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.692179 kubelet[3215]: W0430 03:29:41.692175 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.692374 kubelet[3215]: E0430 03:29:41.692188 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.692432 kubelet[3215]: E0430 03:29:41.692387 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.692432 kubelet[3215]: W0430 03:29:41.692399 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.692432 kubelet[3215]: E0430 03:29:41.692411 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:41.693123 kubelet[3215]: E0430 03:29:41.693094 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:41.693123 kubelet[3215]: W0430 03:29:41.693111 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:41.693123 kubelet[3215]: E0430 03:29:41.693124 3215 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:41.695379 containerd[1674]: time="2025-04-30T03:29:41.695337218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:41.697476 containerd[1674]: time="2025-04-30T03:29:41.697341638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:29:41.701294 containerd[1674]: time="2025-04-30T03:29:41.701233176Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:41.704454 containerd[1674]: time="2025-04-30T03:29:41.704406108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:41.705661 containerd[1674]: time="2025-04-30T03:29:41.705080315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.50523658s" Apr 30 03:29:41.705661 containerd[1674]: time="2025-04-30T03:29:41.705120215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:29:41.707860 containerd[1674]: time="2025-04-30T03:29:41.707832842Z" level=info msg="CreateContainer within sandbox \"3c4f79cb78ade810f07564e37154d51c7d56dfeaa01b0e3fc2f53b4ea2370b2a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:29:41.737891 containerd[1674]: time="2025-04-30T03:29:41.737858641Z" level=info msg="CreateContainer within sandbox \"3c4f79cb78ade810f07564e37154d51c7d56dfeaa01b0e3fc2f53b4ea2370b2a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1632f50994566974fcf7343e1eb5954eb8d2ad190963bec70ee4ff8e5b7176b8\"" Apr 30 03:29:41.738293 containerd[1674]: time="2025-04-30T03:29:41.738210444Z" level=info msg="StartContainer for \"1632f50994566974fcf7343e1eb5954eb8d2ad190963bec70ee4ff8e5b7176b8\"" Apr 30 03:29:41.774739 systemd[1]: Started cri-containerd-1632f50994566974fcf7343e1eb5954eb8d2ad190963bec70ee4ff8e5b7176b8.scope - libcontainer container 1632f50994566974fcf7343e1eb5954eb8d2ad190963bec70ee4ff8e5b7176b8. Apr 30 03:29:41.819000 containerd[1674]: time="2025-04-30T03:29:41.817141530Z" level=info msg="StartContainer for \"1632f50994566974fcf7343e1eb5954eb8d2ad190963bec70ee4ff8e5b7176b8\" returns successfully" Apr 30 03:29:41.840870 systemd[1]: cri-containerd-1632f50994566974fcf7343e1eb5954eb8d2ad190963bec70ee4ff8e5b7176b8.scope: Deactivated successfully. 
Apr 30 03:29:42.205259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1632f50994566974fcf7343e1eb5954eb8d2ad190963bec70ee4ff8e5b7176b8-rootfs.mount: Deactivated successfully. Apr 30 03:29:42.630362 kubelet[3215]: I0430 03:29:42.630089 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66dd6747d8-sg22f" podStartSLOduration=3.632648542 podStartE2EDuration="5.63006782s" podCreationTimestamp="2025-04-30 03:29:37 +0000 UTC" firstStartedPulling="2025-04-30 03:29:38.200991942 +0000 UTC m=+19.776383352" lastFinishedPulling="2025-04-30 03:29:40.19841122 +0000 UTC m=+21.773802630" observedRunningTime="2025-04-30 03:29:40.623513051 +0000 UTC m=+22.198904561" watchObservedRunningTime="2025-04-30 03:29:42.63006782 +0000 UTC m=+24.205459330" Apr 30 03:29:43.253107 containerd[1674]: time="2025-04-30T03:29:43.253018662Z" level=info msg="shim disconnected" id=1632f50994566974fcf7343e1eb5954eb8d2ad190963bec70ee4ff8e5b7176b8 namespace=k8s.io Apr 30 03:29:43.253107 containerd[1674]: time="2025-04-30T03:29:43.253101563Z" level=warning msg="cleaning up after shim disconnected" id=1632f50994566974fcf7343e1eb5954eb8d2ad190963bec70ee4ff8e5b7176b8 namespace=k8s.io Apr 30 03:29:43.253107 containerd[1674]: time="2025-04-30T03:29:43.253113563Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:43.523375 kubelet[3215]: E0430 03:29:43.523209 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kz4tb" podUID="dcdc5f6d-cefa-4e15-8498-441a243c70ee" Apr 30 03:29:43.620047 containerd[1674]: time="2025-04-30T03:29:43.619929776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:29:45.522794 kubelet[3215]: E0430 03:29:45.522712 3215 pod_workers.go:1298] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kz4tb" podUID="dcdc5f6d-cefa-4e15-8498-441a243c70ee" Apr 30 03:29:47.523170 kubelet[3215]: E0430 03:29:47.523072 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kz4tb" podUID="dcdc5f6d-cefa-4e15-8498-441a243c70ee" Apr 30 03:29:49.113754 containerd[1674]: time="2025-04-30T03:29:49.113708088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:49.116301 containerd[1674]: time="2025-04-30T03:29:49.116230412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:29:49.119204 containerd[1674]: time="2025-04-30T03:29:49.119151241Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:49.123361 containerd[1674]: time="2025-04-30T03:29:49.123312782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:49.124095 containerd[1674]: time="2025-04-30T03:29:49.123970689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.503168604s" Apr 30 03:29:49.124095 containerd[1674]: time="2025-04-30T03:29:49.124010389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:29:49.126467 containerd[1674]: time="2025-04-30T03:29:49.126441313Z" level=info msg="CreateContainer within sandbox \"3c4f79cb78ade810f07564e37154d51c7d56dfeaa01b0e3fc2f53b4ea2370b2a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:29:49.165004 containerd[1674]: time="2025-04-30T03:29:49.164909592Z" level=info msg="CreateContainer within sandbox \"3c4f79cb78ade810f07564e37154d51c7d56dfeaa01b0e3fc2f53b4ea2370b2a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c1836ae165a811cab741fefb53556bfef6dbe69a9f371e43a3152170a35229b1\"" Apr 30 03:29:49.166911 containerd[1674]: time="2025-04-30T03:29:49.166684709Z" level=info msg="StartContainer for \"c1836ae165a811cab741fefb53556bfef6dbe69a9f371e43a3152170a35229b1\"" Apr 30 03:29:49.200744 systemd[1]: Started cri-containerd-c1836ae165a811cab741fefb53556bfef6dbe69a9f371e43a3152170a35229b1.scope - libcontainer container c1836ae165a811cab741fefb53556bfef6dbe69a9f371e43a3152170a35229b1. 
Apr 30 03:29:49.229213 containerd[1674]: time="2025-04-30T03:29:49.229092324Z" level=info msg="StartContainer for \"c1836ae165a811cab741fefb53556bfef6dbe69a9f371e43a3152170a35229b1\" returns successfully" Apr 30 03:29:49.523281 kubelet[3215]: E0430 03:29:49.523222 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kz4tb" podUID="dcdc5f6d-cefa-4e15-8498-441a243c70ee" Apr 30 03:29:50.770265 containerd[1674]: time="2025-04-30T03:29:50.770212686Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:29:50.773727 systemd[1]: cri-containerd-c1836ae165a811cab741fefb53556bfef6dbe69a9f371e43a3152170a35229b1.scope: Deactivated successfully. Apr 30 03:29:50.796378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1836ae165a811cab741fefb53556bfef6dbe69a9f371e43a3152170a35229b1-rootfs.mount: Deactivated successfully. 
Apr 30 03:29:50.845868 kubelet[3215]: I0430 03:29:50.844367 3215 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 03:29:51.306414 kubelet[3215]: I0430 03:29:50.887573 3215 topology_manager.go:215] "Topology Admit Handler" podUID="a95e4c6a-c6b4-4ac1-a191-4c74878eb86f" podNamespace="calico-system" podName="calico-kube-controllers-f7478c6c9-ncjxs" Apr 30 03:29:51.306414 kubelet[3215]: I0430 03:29:50.889959 3215 topology_manager.go:215] "Topology Admit Handler" podUID="c307cc16-3906-4e89-a216-d52cd8df720e" podNamespace="calico-apiserver" podName="calico-apiserver-5bc9f7b477-z2mdz" Apr 30 03:29:51.306414 kubelet[3215]: I0430 03:29:50.890724 3215 topology_manager.go:215] "Topology Admit Handler" podUID="9774df4c-daf4-44bc-bfa3-9191c38f8346" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gbccc" Apr 30 03:29:51.306414 kubelet[3215]: I0430 03:29:50.893538 3215 topology_manager.go:215] "Topology Admit Handler" podUID="a78b558b-dccc-4c43-976d-3e3ed712a212" podNamespace="kube-system" podName="coredns-7db6d8ff4d-msddw" Apr 30 03:29:51.306414 kubelet[3215]: I0430 03:29:50.894137 3215 topology_manager.go:215] "Topology Admit Handler" podUID="a556b718-e37a-4703-8148-82b2fa7a6e46" podNamespace="calico-apiserver" podName="calico-apiserver-5bc9f7b477-ksgxv" Apr 30 03:29:51.306414 kubelet[3215]: I0430 03:29:51.046227 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9774df4c-daf4-44bc-bfa3-9191c38f8346-config-volume\") pod \"coredns-7db6d8ff4d-gbccc\" (UID: \"9774df4c-daf4-44bc-bfa3-9191c38f8346\") " pod="kube-system/coredns-7db6d8ff4d-gbccc" Apr 30 03:29:51.306414 kubelet[3215]: I0430 03:29:51.046279 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c307cc16-3906-4e89-a216-d52cd8df720e-calico-apiserver-certs\") pod 
\"calico-apiserver-5bc9f7b477-z2mdz\" (UID: \"c307cc16-3906-4e89-a216-d52cd8df720e\") " pod="calico-apiserver/calico-apiserver-5bc9f7b477-z2mdz" Apr 30 03:29:50.905364 systemd[1]: Created slice kubepods-besteffort-poda95e4c6a_c6b4_4ac1_a191_4c74878eb86f.slice - libcontainer container kubepods-besteffort-poda95e4c6a_c6b4_4ac1_a191_4c74878eb86f.slice. Apr 30 03:29:51.306945 kubelet[3215]: I0430 03:29:51.046318 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a95e4c6a-c6b4-4ac1-a191-4c74878eb86f-tigera-ca-bundle\") pod \"calico-kube-controllers-f7478c6c9-ncjxs\" (UID: \"a95e4c6a-c6b4-4ac1-a191-4c74878eb86f\") " pod="calico-system/calico-kube-controllers-f7478c6c9-ncjxs" Apr 30 03:29:51.306945 kubelet[3215]: I0430 03:29:51.046352 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a556b718-e37a-4703-8148-82b2fa7a6e46-calico-apiserver-certs\") pod \"calico-apiserver-5bc9f7b477-ksgxv\" (UID: \"a556b718-e37a-4703-8148-82b2fa7a6e46\") " pod="calico-apiserver/calico-apiserver-5bc9f7b477-ksgxv" Apr 30 03:29:51.306945 kubelet[3215]: I0430 03:29:51.046386 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfpjz\" (UniqueName: \"kubernetes.io/projected/9774df4c-daf4-44bc-bfa3-9191c38f8346-kube-api-access-wfpjz\") pod \"coredns-7db6d8ff4d-gbccc\" (UID: \"9774df4c-daf4-44bc-bfa3-9191c38f8346\") " pod="kube-system/coredns-7db6d8ff4d-gbccc" Apr 30 03:29:51.306945 kubelet[3215]: I0430 03:29:51.046408 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpw9t\" (UniqueName: \"kubernetes.io/projected/c307cc16-3906-4e89-a216-d52cd8df720e-kube-api-access-bpw9t\") pod \"calico-apiserver-5bc9f7b477-z2mdz\" (UID: 
\"c307cc16-3906-4e89-a216-d52cd8df720e\") " pod="calico-apiserver/calico-apiserver-5bc9f7b477-z2mdz" Apr 30 03:29:51.306945 kubelet[3215]: I0430 03:29:51.046435 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a78b558b-dccc-4c43-976d-3e3ed712a212-config-volume\") pod \"coredns-7db6d8ff4d-msddw\" (UID: \"a78b558b-dccc-4c43-976d-3e3ed712a212\") " pod="kube-system/coredns-7db6d8ff4d-msddw" Apr 30 03:29:50.912284 systemd[1]: Created slice kubepods-burstable-pod9774df4c_daf4_44bc_bfa3_9191c38f8346.slice - libcontainer container kubepods-burstable-pod9774df4c_daf4_44bc_bfa3_9191c38f8346.slice. Apr 30 03:29:51.307243 kubelet[3215]: I0430 03:29:51.046465 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp492\" (UniqueName: \"kubernetes.io/projected/a78b558b-dccc-4c43-976d-3e3ed712a212-kube-api-access-qp492\") pod \"coredns-7db6d8ff4d-msddw\" (UID: \"a78b558b-dccc-4c43-976d-3e3ed712a212\") " pod="kube-system/coredns-7db6d8ff4d-msddw" Apr 30 03:29:51.307243 kubelet[3215]: I0430 03:29:51.046515 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4blq\" (UniqueName: \"kubernetes.io/projected/a95e4c6a-c6b4-4ac1-a191-4c74878eb86f-kube-api-access-k4blq\") pod \"calico-kube-controllers-f7478c6c9-ncjxs\" (UID: \"a95e4c6a-c6b4-4ac1-a191-4c74878eb86f\") " pod="calico-system/calico-kube-controllers-f7478c6c9-ncjxs" Apr 30 03:29:51.307243 kubelet[3215]: I0430 03:29:51.046546 3215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw6h5\" (UniqueName: \"kubernetes.io/projected/a556b718-e37a-4703-8148-82b2fa7a6e46-kube-api-access-sw6h5\") pod \"calico-apiserver-5bc9f7b477-ksgxv\" (UID: \"a556b718-e37a-4703-8148-82b2fa7a6e46\") " pod="calico-apiserver/calico-apiserver-5bc9f7b477-ksgxv" 
Apr 30 03:29:50.919972 systemd[1]: Created slice kubepods-besteffort-podc307cc16_3906_4e89_a216_d52cd8df720e.slice - libcontainer container kubepods-besteffort-podc307cc16_3906_4e89_a216_d52cd8df720e.slice. Apr 30 03:29:50.931648 systemd[1]: Created slice kubepods-burstable-poda78b558b_dccc_4c43_976d_3e3ed712a212.slice - libcontainer container kubepods-burstable-poda78b558b_dccc_4c43_976d_3e3ed712a212.slice. Apr 30 03:29:50.937492 systemd[1]: Created slice kubepods-besteffort-poda556b718_e37a_4703_8148_82b2fa7a6e46.slice - libcontainer container kubepods-besteffort-poda556b718_e37a_4703_8148_82b2fa7a6e46.slice. Apr 30 03:29:51.529065 systemd[1]: Created slice kubepods-besteffort-poddcdc5f6d_cefa_4e15_8498_441a243c70ee.slice - libcontainer container kubepods-besteffort-poddcdc5f6d_cefa_4e15_8498_441a243c70ee.slice. Apr 30 03:29:51.532009 containerd[1674]: time="2025-04-30T03:29:51.531670685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kz4tb,Uid:dcdc5f6d-cefa-4e15-8498-441a243c70ee,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:51.606878 containerd[1674]: time="2025-04-30T03:29:51.606726705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7478c6c9-ncjxs,Uid:a95e4c6a-c6b4-4ac1-a191-4c74878eb86f,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:51.609620 containerd[1674]: time="2025-04-30T03:29:51.609375530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc9f7b477-z2mdz,Uid:c307cc16-3906-4e89-a216-d52cd8df720e,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:29:51.609620 containerd[1674]: time="2025-04-30T03:29:51.609500931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc9f7b477-ksgxv,Uid:a556b718-e37a-4703-8148-82b2fa7a6e46,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:29:51.610198 containerd[1674]: time="2025-04-30T03:29:51.610170638Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-msddw,Uid:a78b558b-dccc-4c43-976d-3e3ed712a212,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:51.625003 containerd[1674]: time="2025-04-30T03:29:51.624850779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gbccc,Uid:9774df4c-daf4-44bc-bfa3-9191c38f8346,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:52.596886 containerd[1674]: time="2025-04-30T03:29:52.596626094Z" level=info msg="shim disconnected" id=c1836ae165a811cab741fefb53556bfef6dbe69a9f371e43a3152170a35229b1 namespace=k8s.io Apr 30 03:29:52.596886 containerd[1674]: time="2025-04-30T03:29:52.596695794Z" level=warning msg="cleaning up after shim disconnected" id=c1836ae165a811cab741fefb53556bfef6dbe69a9f371e43a3152170a35229b1 namespace=k8s.io Apr 30 03:29:52.596886 containerd[1674]: time="2025-04-30T03:29:52.596707994Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:52.639337 containerd[1674]: time="2025-04-30T03:29:52.639231902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:29:52.967982 containerd[1674]: time="2025-04-30T03:29:52.967929253Z" level=error msg="Failed to destroy network for sandbox \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.968682 containerd[1674]: time="2025-04-30T03:29:52.968031654Z" level=error msg="Failed to destroy network for sandbox \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.968682 containerd[1674]: time="2025-04-30T03:29:52.968462558Z" level=error msg="encountered an error cleaning up failed sandbox 
\"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.968682 containerd[1674]: time="2025-04-30T03:29:52.968537459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7478c6c9-ncjxs,Uid:a95e4c6a-c6b4-4ac1-a191-4c74878eb86f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.970638 kubelet[3215]: E0430 03:29:52.969842 3215 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.970638 kubelet[3215]: E0430 03:29:52.969932 3215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f7478c6c9-ncjxs" Apr 30 03:29:52.970638 kubelet[3215]: E0430 03:29:52.969965 3215 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f7478c6c9-ncjxs" Apr 30 03:29:52.971133 kubelet[3215]: E0430 03:29:52.970014 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f7478c6c9-ncjxs_calico-system(a95e4c6a-c6b4-4ac1-a191-4c74878eb86f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f7478c6c9-ncjxs_calico-system(a95e4c6a-c6b4-4ac1-a191-4c74878eb86f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f7478c6c9-ncjxs" podUID="a95e4c6a-c6b4-4ac1-a191-4c74878eb86f" Apr 30 03:29:52.973944 containerd[1674]: time="2025-04-30T03:29:52.972766999Z" level=error msg="encountered an error cleaning up failed sandbox \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.973944 containerd[1674]: time="2025-04-30T03:29:52.972874600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc9f7b477-ksgxv,Uid:a556b718-e37a-4703-8148-82b2fa7a6e46,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.974116 kubelet[3215]: E0430 03:29:52.973045 3215 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.974116 kubelet[3215]: E0430 03:29:52.973097 3215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bc9f7b477-ksgxv" Apr 30 03:29:52.974116 kubelet[3215]: E0430 03:29:52.973121 3215 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bc9f7b477-ksgxv" Apr 30 03:29:52.974261 kubelet[3215]: E0430 03:29:52.973161 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bc9f7b477-ksgxv_calico-apiserver(a556b718-e37a-4703-8148-82b2fa7a6e46)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-5bc9f7b477-ksgxv_calico-apiserver(a556b718-e37a-4703-8148-82b2fa7a6e46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bc9f7b477-ksgxv" podUID="a556b718-e37a-4703-8148-82b2fa7a6e46" Apr 30 03:29:52.978246 containerd[1674]: time="2025-04-30T03:29:52.978213351Z" level=error msg="Failed to destroy network for sandbox \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.980015 containerd[1674]: time="2025-04-30T03:29:52.979889467Z" level=error msg="encountered an error cleaning up failed sandbox \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.980280 containerd[1674]: time="2025-04-30T03:29:52.980251371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gbccc,Uid:9774df4c-daf4-44bc-bfa3-9191c38f8346,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.980848 kubelet[3215]: E0430 03:29:52.980506 3215 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.980848 kubelet[3215]: E0430 03:29:52.980553 3215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gbccc" Apr 30 03:29:52.980848 kubelet[3215]: E0430 03:29:52.980580 3215 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gbccc" Apr 30 03:29:52.981029 kubelet[3215]: E0430 03:29:52.980786 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gbccc_kube-system(9774df4c-daf4-44bc-bfa3-9191c38f8346)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gbccc_kube-system(9774df4c-daf4-44bc-bfa3-9191c38f8346)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gbccc" podUID="9774df4c-daf4-44bc-bfa3-9191c38f8346" Apr 30 03:29:52.994846 containerd[1674]: time="2025-04-30T03:29:52.994674009Z" level=error msg="Failed to destroy network for sandbox \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.995922 containerd[1674]: time="2025-04-30T03:29:52.995886721Z" level=error msg="encountered an error cleaning up failed sandbox \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.996077 containerd[1674]: time="2025-04-30T03:29:52.996049722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc9f7b477-z2mdz,Uid:c307cc16-3906-4e89-a216-d52cd8df720e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.996749 kubelet[3215]: E0430 03:29:52.996360 3215 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 30 03:29:52.996749 kubelet[3215]: E0430 03:29:52.996407 3215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bc9f7b477-z2mdz" Apr 30 03:29:52.996749 kubelet[3215]: E0430 03:29:52.996469 3215 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bc9f7b477-z2mdz" Apr 30 03:29:52.996941 kubelet[3215]: E0430 03:29:52.996509 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bc9f7b477-z2mdz_calico-apiserver(c307cc16-3906-4e89-a216-d52cd8df720e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bc9f7b477-z2mdz_calico-apiserver(c307cc16-3906-4e89-a216-d52cd8df720e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bc9f7b477-z2mdz" podUID="c307cc16-3906-4e89-a216-d52cd8df720e" Apr 30 03:29:52.998627 containerd[1674]: time="2025-04-30T03:29:52.998176243Z" level=error msg="Failed to destroy network for 
sandbox \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.998627 containerd[1674]: time="2025-04-30T03:29:52.998479246Z" level=error msg="encountered an error cleaning up failed sandbox \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.998627 containerd[1674]: time="2025-04-30T03:29:52.998525546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kz4tb,Uid:dcdc5f6d-cefa-4e15-8498-441a243c70ee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.999037 kubelet[3215]: E0430 03:29:52.998883 3215 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:52.999037 kubelet[3215]: E0430 03:29:52.998929 3215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kz4tb" Apr 30 03:29:52.999037 kubelet[3215]: E0430 03:29:52.998952 3215 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kz4tb" Apr 30 03:29:52.999209 kubelet[3215]: E0430 03:29:52.998988 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kz4tb_calico-system(dcdc5f6d-cefa-4e15-8498-441a243c70ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kz4tb_calico-system(dcdc5f6d-cefa-4e15-8498-441a243c70ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kz4tb" podUID="dcdc5f6d-cefa-4e15-8498-441a243c70ee" Apr 30 03:29:53.003954 containerd[1674]: time="2025-04-30T03:29:53.003914198Z" level=error msg="Failed to destroy network for sandbox \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:53.004201 containerd[1674]: time="2025-04-30T03:29:53.004171900Z" level=error msg="encountered an error cleaning 
up failed sandbox \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:53.004270 containerd[1674]: time="2025-04-30T03:29:53.004223601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-msddw,Uid:a78b558b-dccc-4c43-976d-3e3ed712a212,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:53.004409 kubelet[3215]: E0430 03:29:53.004372 3215 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:53.004487 kubelet[3215]: E0430 03:29:53.004429 3215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-msddw" Apr 30 03:29:53.004487 kubelet[3215]: E0430 03:29:53.004453 3215 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-msddw" Apr 30 03:29:53.004582 kubelet[3215]: E0430 03:29:53.004512 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-msddw_kube-system(a78b558b-dccc-4c43-976d-3e3ed712a212)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-msddw_kube-system(a78b558b-dccc-4c43-976d-3e3ed712a212)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-msddw" podUID="a78b558b-dccc-4c43-976d-3e3ed712a212" Apr 30 03:29:53.640602 kubelet[3215]: I0430 03:29:53.640564 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Apr 30 03:29:53.641662 containerd[1674]: time="2025-04-30T03:29:53.641420809Z" level=info msg="StopPodSandbox for \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\"" Apr 30 03:29:53.641662 containerd[1674]: time="2025-04-30T03:29:53.641647011Z" level=info msg="Ensure that sandbox 491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce in task-service has been cleanup successfully" Apr 30 03:29:53.646142 kubelet[3215]: I0430 03:29:53.645116 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Apr 30 03:29:53.646258 containerd[1674]: 
time="2025-04-30T03:29:53.645764150Z" level=info msg="StopPodSandbox for \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\"" Apr 30 03:29:53.646258 containerd[1674]: time="2025-04-30T03:29:53.645958752Z" level=info msg="Ensure that sandbox 74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742 in task-service has been cleanup successfully" Apr 30 03:29:53.647319 kubelet[3215]: I0430 03:29:53.647294 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Apr 30 03:29:53.648528 containerd[1674]: time="2025-04-30T03:29:53.648503076Z" level=info msg="StopPodSandbox for \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\"" Apr 30 03:29:53.648858 kubelet[3215]: I0430 03:29:53.648830 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Apr 30 03:29:53.649248 containerd[1674]: time="2025-04-30T03:29:53.649223683Z" level=info msg="Ensure that sandbox 5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4 in task-service has been cleanup successfully" Apr 30 03:29:53.652947 containerd[1674]: time="2025-04-30T03:29:53.652921019Z" level=info msg="StopPodSandbox for \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\"" Apr 30 03:29:53.654135 kubelet[3215]: I0430 03:29:53.653503 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Apr 30 03:29:53.654349 containerd[1674]: time="2025-04-30T03:29:53.654321832Z" level=info msg="Ensure that sandbox 486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8 in task-service has been cleanup successfully" Apr 30 03:29:53.655852 containerd[1674]: time="2025-04-30T03:29:53.655810846Z" level=info msg="StopPodSandbox for 
\"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\"" Apr 30 03:29:53.657020 containerd[1674]: time="2025-04-30T03:29:53.656992958Z" level=info msg="Ensure that sandbox 2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63 in task-service has been cleanup successfully" Apr 30 03:29:53.665340 kubelet[3215]: I0430 03:29:53.664830 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Apr 30 03:29:53.667751 containerd[1674]: time="2025-04-30T03:29:53.666897853Z" level=info msg="StopPodSandbox for \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\"" Apr 30 03:29:53.667751 containerd[1674]: time="2025-04-30T03:29:53.667111855Z" level=info msg="Ensure that sandbox a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986 in task-service has been cleanup successfully" Apr 30 03:29:53.721434 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8-shm.mount: Deactivated successfully. Apr 30 03:29:53.721553 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63-shm.mount: Deactivated successfully. Apr 30 03:29:53.721641 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986-shm.mount: Deactivated successfully. Apr 30 03:29:53.721718 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742-shm.mount: Deactivated successfully. Apr 30 03:29:53.721809 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce-shm.mount: Deactivated successfully. 
Apr 30 03:29:53.776790 containerd[1674]: time="2025-04-30T03:29:53.776732606Z" level=error msg="StopPodSandbox for \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\" failed" error="failed to destroy network for sandbox \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:53.777790 kubelet[3215]: E0430 03:29:53.777746 3215 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Apr 30 03:29:53.777919 kubelet[3215]: E0430 03:29:53.777811 3215 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce"} Apr 30 03:29:53.777919 kubelet[3215]: E0430 03:29:53.777888 3215 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a556b718-e37a-4703-8148-82b2fa7a6e46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:53.778072 kubelet[3215]: E0430 03:29:53.777921 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"a556b718-e37a-4703-8148-82b2fa7a6e46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bc9f7b477-ksgxv" podUID="a556b718-e37a-4703-8148-82b2fa7a6e46" Apr 30 03:29:53.790650 containerd[1674]: time="2025-04-30T03:29:53.790496537Z" level=error msg="StopPodSandbox for \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\" failed" error="failed to destroy network for sandbox \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:53.791583 kubelet[3215]: E0430 03:29:53.791430 3215 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Apr 30 03:29:53.791583 kubelet[3215]: E0430 03:29:53.791584 3215 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4"} Apr 30 03:29:53.791957 kubelet[3215]: E0430 03:29:53.791652 3215 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9774df4c-daf4-44bc-bfa3-9191c38f8346\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:53.791957 kubelet[3215]: E0430 03:29:53.791684 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9774df4c-daf4-44bc-bfa3-9191c38f8346\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gbccc" podUID="9774df4c-daf4-44bc-bfa3-9191c38f8346" Apr 30 03:29:53.792347 containerd[1674]: time="2025-04-30T03:29:53.791800250Z" level=error msg="StopPodSandbox for \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\" failed" error="failed to destroy network for sandbox \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:53.792409 kubelet[3215]: E0430 03:29:53.792111 3215 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Apr 30 
03:29:53.792409 kubelet[3215]: E0430 03:29:53.792199 3215 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742"} Apr 30 03:29:53.792409 kubelet[3215]: E0430 03:29:53.792237 3215 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dcdc5f6d-cefa-4e15-8498-441a243c70ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:53.792409 kubelet[3215]: E0430 03:29:53.792264 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dcdc5f6d-cefa-4e15-8498-441a243c70ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kz4tb" podUID="dcdc5f6d-cefa-4e15-8498-441a243c70ee" Apr 30 03:29:53.797179 containerd[1674]: time="2025-04-30T03:29:53.796367394Z" level=error msg="StopPodSandbox for \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\" failed" error="failed to destroy network for sandbox \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:53.797280 kubelet[3215]: E0430 03:29:53.797242 3215 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Apr 30 03:29:53.797372 kubelet[3215]: E0430 03:29:53.797277 3215 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63"} Apr 30 03:29:53.797372 kubelet[3215]: E0430 03:29:53.797314 3215 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c307cc16-3906-4e89-a216-d52cd8df720e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:53.798277 kubelet[3215]: E0430 03:29:53.797368 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c307cc16-3906-4e89-a216-d52cd8df720e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bc9f7b477-z2mdz" podUID="c307cc16-3906-4e89-a216-d52cd8df720e" Apr 30 03:29:53.798770 containerd[1674]: time="2025-04-30T03:29:53.798687816Z" level=error 
msg="StopPodSandbox for \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\" failed" error="failed to destroy network for sandbox \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:53.798981 containerd[1674]: time="2025-04-30T03:29:53.798950019Z" level=error msg="StopPodSandbox for \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\" failed" error="failed to destroy network for sandbox \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:53.799421 kubelet[3215]: E0430 03:29:53.799170 3215 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Apr 30 03:29:53.799421 kubelet[3215]: E0430 03:29:53.799208 3215 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986"} Apr 30 03:29:53.799421 kubelet[3215]: E0430 03:29:53.799240 3215 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a95e4c6a-c6b4-4ac1-a191-4c74878eb86f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:53.799421 kubelet[3215]: E0430 03:29:53.799266 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a95e4c6a-c6b4-4ac1-a191-4c74878eb86f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f7478c6c9-ncjxs" podUID="a95e4c6a-c6b4-4ac1-a191-4c74878eb86f" Apr 30 03:29:53.799726 kubelet[3215]: E0430 03:29:53.799302 3215 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Apr 30 03:29:53.799726 kubelet[3215]: E0430 03:29:53.799321 3215 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8"} Apr 30 03:29:53.799726 kubelet[3215]: E0430 03:29:53.799345 3215 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a78b558b-dccc-4c43-976d-3e3ed712a212\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:53.799726 kubelet[3215]: E0430 03:29:53.799369 3215 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a78b558b-dccc-4c43-976d-3e3ed712a212\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-msddw" podUID="a78b558b-dccc-4c43-976d-3e3ed712a212" Apr 30 03:29:58.524668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3464299812.mount: Deactivated successfully. 
Apr 30 03:29:58.573274 containerd[1674]: time="2025-04-30T03:29:58.573225383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:58.575141 containerd[1674]: time="2025-04-30T03:29:58.575090901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:29:58.578934 containerd[1674]: time="2025-04-30T03:29:58.578879637Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:58.582505 containerd[1674]: time="2025-04-30T03:29:58.582454971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:58.583403 containerd[1674]: time="2025-04-30T03:29:58.583004776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 5.943730274s" Apr 30 03:29:58.583403 containerd[1674]: time="2025-04-30T03:29:58.583044077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:29:58.597435 containerd[1674]: time="2025-04-30T03:29:58.597285813Z" level=info msg="CreateContainer within sandbox \"3c4f79cb78ade810f07564e37154d51c7d56dfeaa01b0e3fc2f53b4ea2370b2a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:29:58.639777 containerd[1674]: time="2025-04-30T03:29:58.639732620Z" level=info 
msg="CreateContainer within sandbox \"3c4f79cb78ade810f07564e37154d51c7d56dfeaa01b0e3fc2f53b4ea2370b2a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bc5c5e55a3cac815bca30ad716218d74b7244152a6cf8c50d253212ac2e138ae\"" Apr 30 03:29:58.641626 containerd[1674]: time="2025-04-30T03:29:58.640272025Z" level=info msg="StartContainer for \"bc5c5e55a3cac815bca30ad716218d74b7244152a6cf8c50d253212ac2e138ae\"" Apr 30 03:29:58.672786 systemd[1]: Started cri-containerd-bc5c5e55a3cac815bca30ad716218d74b7244152a6cf8c50d253212ac2e138ae.scope - libcontainer container bc5c5e55a3cac815bca30ad716218d74b7244152a6cf8c50d253212ac2e138ae. Apr 30 03:29:58.713762 containerd[1674]: time="2025-04-30T03:29:58.713719829Z" level=info msg="StartContainer for \"bc5c5e55a3cac815bca30ad716218d74b7244152a6cf8c50d253212ac2e138ae\" returns successfully" Apr 30 03:29:58.905797 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:29:58.905955 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Apr 30 03:29:59.702242 kubelet[3215]: I0430 03:29:59.702176 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bd5vn" podStartSLOduration=2.358138335 podStartE2EDuration="22.702158291s" podCreationTimestamp="2025-04-30 03:29:37 +0000 UTC" firstStartedPulling="2025-04-30 03:29:38.239840429 +0000 UTC m=+19.815231839" lastFinishedPulling="2025-04-30 03:29:58.583860285 +0000 UTC m=+40.159251795" observedRunningTime="2025-04-30 03:29:59.701778487 +0000 UTC m=+41.277169997" watchObservedRunningTime="2025-04-30 03:29:59.702158291 +0000 UTC m=+41.277549801" Apr 30 03:30:03.301171 kubelet[3215]: I0430 03:30:03.301030 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:06.211880 kubelet[3215]: I0430 03:30:06.211460 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:06.525762 containerd[1674]: time="2025-04-30T03:30:06.524642981Z" level=info msg="StopPodSandbox for \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\"" Apr 30 03:30:06.525762 containerd[1674]: time="2025-04-30T03:30:06.525096686Z" level=info msg="StopPodSandbox for \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\"" Apr 30 03:30:06.531843 containerd[1674]: time="2025-04-30T03:30:06.531287346Z" level=info msg="StopPodSandbox for \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\"" Apr 30 03:30:06.730618 kernel: bpftool[4708]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:30:06.743625 containerd[1674]: 2025-04-30 03:30:06.660 [INFO][4656] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Apr 30 03:30:06.743625 containerd[1674]: 2025-04-30 03:30:06.660 [INFO][4656] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" iface="eth0" netns="/var/run/netns/cni-92bfcb7e-b141-5510-d858-f2b491928dfa" Apr 30 03:30:06.743625 containerd[1674]: 2025-04-30 03:30:06.660 [INFO][4656] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" iface="eth0" netns="/var/run/netns/cni-92bfcb7e-b141-5510-d858-f2b491928dfa" Apr 30 03:30:06.743625 containerd[1674]: 2025-04-30 03:30:06.661 [INFO][4656] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" iface="eth0" netns="/var/run/netns/cni-92bfcb7e-b141-5510-d858-f2b491928dfa" Apr 30 03:30:06.743625 containerd[1674]: 2025-04-30 03:30:06.661 [INFO][4656] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Apr 30 03:30:06.743625 containerd[1674]: 2025-04-30 03:30:06.661 [INFO][4656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Apr 30 03:30:06.743625 containerd[1674]: 2025-04-30 03:30:06.711 [INFO][4681] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" HandleID="k8s-pod-network.2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:06.743625 containerd[1674]: 2025-04-30 03:30:06.713 [INFO][4681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:06.743625 containerd[1674]: 2025-04-30 03:30:06.714 [INFO][4681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:06.743625 containerd[1674]: 2025-04-30 03:30:06.724 [WARNING][4681] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" HandleID="k8s-pod-network.2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:06.743625 containerd[1674]: 2025-04-30 03:30:06.725 [INFO][4681] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" HandleID="k8s-pod-network.2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:06.743625 containerd[1674]: 2025-04-30 03:30:06.732 [INFO][4681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:06.743625 containerd[1674]: 2025-04-30 03:30:06.739 [INFO][4656] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Apr 30 03:30:06.747471 containerd[1674]: time="2025-04-30T03:30:06.744416723Z" level=info msg="TearDown network for sandbox \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\" successfully" Apr 30 03:30:06.747471 containerd[1674]: time="2025-04-30T03:30:06.744460324Z" level=info msg="StopPodSandbox for \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\" returns successfully" Apr 30 03:30:06.747195 systemd[1]: run-netns-cni\x2d92bfcb7e\x2db141\x2d5510\x2dd858\x2df2b491928dfa.mount: Deactivated successfully. 
Apr 30 03:30:06.749630 containerd[1674]: time="2025-04-30T03:30:06.748434361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc9f7b477-z2mdz,Uid:c307cc16-3906-4e89-a216-d52cd8df720e,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:30:06.775559 containerd[1674]: 2025-04-30 03:30:06.672 [INFO][4667] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Apr 30 03:30:06.775559 containerd[1674]: 2025-04-30 03:30:06.673 [INFO][4667] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" iface="eth0" netns="/var/run/netns/cni-094dcc4c-5921-29f2-827b-5a2310b6606e" Apr 30 03:30:06.775559 containerd[1674]: 2025-04-30 03:30:06.673 [INFO][4667] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" iface="eth0" netns="/var/run/netns/cni-094dcc4c-5921-29f2-827b-5a2310b6606e" Apr 30 03:30:06.775559 containerd[1674]: 2025-04-30 03:30:06.674 [INFO][4667] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" iface="eth0" netns="/var/run/netns/cni-094dcc4c-5921-29f2-827b-5a2310b6606e" Apr 30 03:30:06.775559 containerd[1674]: 2025-04-30 03:30:06.674 [INFO][4667] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Apr 30 03:30:06.775559 containerd[1674]: 2025-04-30 03:30:06.674 [INFO][4667] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Apr 30 03:30:06.775559 containerd[1674]: 2025-04-30 03:30:06.753 [INFO][4687] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" HandleID="k8s-pod-network.5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:06.775559 containerd[1674]: 2025-04-30 03:30:06.754 [INFO][4687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:06.775559 containerd[1674]: 2025-04-30 03:30:06.754 [INFO][4687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:06.775559 containerd[1674]: 2025-04-30 03:30:06.765 [WARNING][4687] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" HandleID="k8s-pod-network.5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:06.775559 containerd[1674]: 2025-04-30 03:30:06.765 [INFO][4687] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" HandleID="k8s-pod-network.5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:06.775559 containerd[1674]: 2025-04-30 03:30:06.767 [INFO][4687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:06.775559 containerd[1674]: 2025-04-30 03:30:06.769 [INFO][4667] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Apr 30 03:30:06.782227 containerd[1674]: time="2025-04-30T03:30:06.779736653Z" level=info msg="TearDown network for sandbox \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\" successfully" Apr 30 03:30:06.782227 containerd[1674]: time="2025-04-30T03:30:06.779775753Z" level=info msg="StopPodSandbox for \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\" returns successfully" Apr 30 03:30:06.784748 containerd[1674]: time="2025-04-30T03:30:06.784094494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gbccc,Uid:9774df4c-daf4-44bc-bfa3-9191c38f8346,Namespace:kube-system,Attempt:1,}" Apr 30 03:30:06.785127 systemd[1]: run-netns-cni\x2d094dcc4c\x2d5921\x2d29f2\x2d827b\x2d5a2310b6606e.mount: Deactivated successfully. 
Apr 30 03:30:06.819242 containerd[1674]: 2025-04-30 03:30:06.693 [INFO][4647] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Apr 30 03:30:06.819242 containerd[1674]: 2025-04-30 03:30:06.693 [INFO][4647] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" iface="eth0" netns="/var/run/netns/cni-2dd3ba81-6953-da8c-39c3-1f94938f35a9" Apr 30 03:30:06.819242 containerd[1674]: 2025-04-30 03:30:06.693 [INFO][4647] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" iface="eth0" netns="/var/run/netns/cni-2dd3ba81-6953-da8c-39c3-1f94938f35a9" Apr 30 03:30:06.819242 containerd[1674]: 2025-04-30 03:30:06.694 [INFO][4647] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" iface="eth0" netns="/var/run/netns/cni-2dd3ba81-6953-da8c-39c3-1f94938f35a9" Apr 30 03:30:06.819242 containerd[1674]: 2025-04-30 03:30:06.694 [INFO][4647] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Apr 30 03:30:06.819242 containerd[1674]: 2025-04-30 03:30:06.694 [INFO][4647] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Apr 30 03:30:06.819242 containerd[1674]: 2025-04-30 03:30:06.786 [INFO][4694] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" HandleID="k8s-pod-network.486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:06.819242 containerd[1674]: 2025-04-30 03:30:06.787 [INFO][4694] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:06.819242 containerd[1674]: 2025-04-30 03:30:06.788 [INFO][4694] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:06.819242 containerd[1674]: 2025-04-30 03:30:06.808 [WARNING][4694] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" HandleID="k8s-pod-network.486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:06.819242 containerd[1674]: 2025-04-30 03:30:06.808 [INFO][4694] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" HandleID="k8s-pod-network.486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:06.819242 containerd[1674]: 2025-04-30 03:30:06.810 [INFO][4694] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:06.819242 containerd[1674]: 2025-04-30 03:30:06.813 [INFO][4647] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Apr 30 03:30:06.823038 containerd[1674]: time="2025-04-30T03:30:06.823003757Z" level=info msg="TearDown network for sandbox \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\" successfully" Apr 30 03:30:06.823134 containerd[1674]: time="2025-04-30T03:30:06.823117158Z" level=info msg="StopPodSandbox for \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\" returns successfully" Apr 30 03:30:06.823888 systemd[1]: run-netns-cni\x2d2dd3ba81\x2d6953\x2dda8c\x2d39c3\x2d1f94938f35a9.mount: Deactivated successfully. 
Apr 30 03:30:06.827450 containerd[1674]: time="2025-04-30T03:30:06.826226387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-msddw,Uid:a78b558b-dccc-4c43-976d-3e3ed712a212,Namespace:kube-system,Attempt:1,}" Apr 30 03:30:07.097087 systemd-networkd[1453]: cali2f248a6691d: Link UP Apr 30 03:30:07.098200 systemd-networkd[1453]: cali2f248a6691d: Gained carrier Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:06.866 [INFO][4713] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0 calico-apiserver-5bc9f7b477- calico-apiserver c307cc16-3906-4e89-a216-d52cd8df720e 755 0 2025-04-30 03:29:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bc9f7b477 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-e2728433b6 calico-apiserver-5bc9f7b477-z2mdz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2f248a6691d [] []}} ContainerID="d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-z2mdz" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-" Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:06.866 [INFO][4713] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-z2mdz" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:06.971 [INFO][4740] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" HandleID="k8s-pod-network.d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:06.985 [INFO][4740] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" HandleID="k8s-pod-network.d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004e4de0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-e2728433b6", "pod":"calico-apiserver-5bc9f7b477-z2mdz", "timestamp":"2025-04-30 03:30:06.971256539 +0000 UTC"}, Hostname:"ci-4081.3.3-a-e2728433b6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:06.985 [INFO][4740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:06.985 [INFO][4740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:06.986 [INFO][4740] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-e2728433b6' Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:06.991 [INFO][4740] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:07.002 [INFO][4740] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:07.010 [INFO][4740] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:07.016 [INFO][4740] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:07.021 [INFO][4740] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:07.021 [INFO][4740] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:07.024 [INFO][4740] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:07.040 [INFO][4740] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:07.052 [INFO][4740] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.106.129/26] block=192.168.106.128/26 handle="k8s-pod-network.d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:07.052 [INFO][4740] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.129/26] handle="k8s-pod-network.d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:07.052 [INFO][4740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:07.138364 containerd[1674]: 2025-04-30 03:30:07.052 [INFO][4740] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.129/26] IPv6=[] ContainerID="d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" HandleID="k8s-pod-network.d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:07.140454 containerd[1674]: 2025-04-30 03:30:07.064 [INFO][4713] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-z2mdz" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0", GenerateName:"calico-apiserver-5bc9f7b477-", Namespace:"calico-apiserver", SelfLink:"", UID:"c307cc16-3906-4e89-a216-d52cd8df720e", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc9f7b477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"", Pod:"calico-apiserver-5bc9f7b477-z2mdz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f248a6691d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:07.140454 containerd[1674]: 2025-04-30 03:30:07.065 [INFO][4713] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.129/32] ContainerID="d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-z2mdz" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:07.140454 containerd[1674]: 2025-04-30 03:30:07.066 [INFO][4713] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f248a6691d ContainerID="d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-z2mdz" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:07.140454 containerd[1674]: 2025-04-30 03:30:07.099 [INFO][4713] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-z2mdz" 
WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:07.140454 containerd[1674]: 2025-04-30 03:30:07.099 [INFO][4713] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-z2mdz" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0", GenerateName:"calico-apiserver-5bc9f7b477-", Namespace:"calico-apiserver", SelfLink:"", UID:"c307cc16-3906-4e89-a216-d52cd8df720e", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc9f7b477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac", Pod:"calico-apiserver-5bc9f7b477-z2mdz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f248a6691d", MAC:"7a:49:c2:6c:bc:38", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:07.140454 containerd[1674]: 2025-04-30 03:30:07.134 [INFO][4713] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-z2mdz" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:07.145347 systemd-networkd[1453]: cali051c1897ab9: Link UP Apr 30 03:30:07.147238 systemd-networkd[1453]: cali051c1897ab9: Gained carrier Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:06.933 [INFO][4725] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0 coredns-7db6d8ff4d- kube-system 9774df4c-daf4-44bc-bfa3-9191c38f8346 756 0 2025-04-30 03:29:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-e2728433b6 coredns-7db6d8ff4d-gbccc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali051c1897ab9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gbccc" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-" Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:06.934 [INFO][4725] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gbccc" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.033 [INFO][4757] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" HandleID="k8s-pod-network.63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.060 [INFO][4757] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" HandleID="k8s-pod-network.63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290870), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-e2728433b6", "pod":"coredns-7db6d8ff4d-gbccc", "timestamp":"2025-04-30 03:30:07.033385219 +0000 UTC"}, Hostname:"ci-4081.3.3-a-e2728433b6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.060 [INFO][4757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.060 [INFO][4757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.061 [INFO][4757] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-e2728433b6' Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.063 [INFO][4757] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.072 [INFO][4757] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.079 [INFO][4757] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.082 [INFO][4757] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.086 [INFO][4757] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.087 [INFO][4757] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.089 [INFO][4757] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8 Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.103 [INFO][4757] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.126 [INFO][4757] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.106.130/26] block=192.168.106.128/26 handle="k8s-pod-network.63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.127 [INFO][4757] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.130/26] handle="k8s-pod-network.63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.127 [INFO][4757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:07.182518 containerd[1674]: 2025-04-30 03:30:07.127 [INFO][4757] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.130/26] IPv6=[] ContainerID="63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" HandleID="k8s-pod-network.63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:07.183445 containerd[1674]: 2025-04-30 03:30:07.134 [INFO][4725] cni-plugin/k8s.go 386: Populated endpoint ContainerID="63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gbccc" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9774df4c-daf4-44bc-bfa3-9191c38f8346", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"", Pod:"coredns-7db6d8ff4d-gbccc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali051c1897ab9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:07.183445 containerd[1674]: 2025-04-30 03:30:07.134 [INFO][4725] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.130/32] ContainerID="63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gbccc" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:07.183445 containerd[1674]: 2025-04-30 03:30:07.135 [INFO][4725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali051c1897ab9 ContainerID="63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gbccc" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:07.183445 containerd[1674]: 2025-04-30 03:30:07.147 [INFO][4725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gbccc" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:07.183445 containerd[1674]: 2025-04-30 03:30:07.150 [INFO][4725] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gbccc" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9774df4c-daf4-44bc-bfa3-9191c38f8346", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8", Pod:"coredns-7db6d8ff4d-gbccc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali051c1897ab9", MAC:"f2:2a:06:15:d2:51", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:07.183445 containerd[1674]: 2025-04-30 03:30:07.174 [INFO][4725] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gbccc" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:07.251145 containerd[1674]: time="2025-04-30T03:30:07.251007349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:07.251145 containerd[1674]: time="2025-04-30T03:30:07.251080950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:07.251145 containerd[1674]: time="2025-04-30T03:30:07.251104250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:07.251651 containerd[1674]: time="2025-04-30T03:30:07.251255651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:07.279666 containerd[1674]: time="2025-04-30T03:30:07.279561315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:07.280008 containerd[1674]: time="2025-04-30T03:30:07.279868218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:07.286145 containerd[1674]: time="2025-04-30T03:30:07.285718673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:07.287609 containerd[1674]: time="2025-04-30T03:30:07.287298587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:07.296168 systemd[1]: Started cri-containerd-d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac.scope - libcontainer container d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac. Apr 30 03:30:07.316656 systemd-networkd[1453]: cali0feb25723c7: Link UP Apr 30 03:30:07.316898 systemd-networkd[1453]: cali0feb25723c7: Gained carrier Apr 30 03:30:07.338793 systemd[1]: Started cri-containerd-63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8.scope - libcontainer container 63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8. 
Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.074 [INFO][4758] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0 coredns-7db6d8ff4d- kube-system a78b558b-dccc-4c43-976d-3e3ed712a212 757 0 2025-04-30 03:29:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-e2728433b6 coredns-7db6d8ff4d-msddw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0feb25723c7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" Namespace="kube-system" Pod="coredns-7db6d8ff4d-msddw" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-" Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.074 [INFO][4758] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" Namespace="kube-system" Pod="coredns-7db6d8ff4d-msddw" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.204 [INFO][4779] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" HandleID="k8s-pod-network.bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.218 [INFO][4779] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" HandleID="k8s-pod-network.bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" 
Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e4450), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-e2728433b6", "pod":"coredns-7db6d8ff4d-msddw", "timestamp":"2025-04-30 03:30:07.204009411 +0000 UTC"}, Hostname:"ci-4081.3.3-a-e2728433b6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.218 [INFO][4779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.218 [INFO][4779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.218 [INFO][4779] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-e2728433b6' Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.225 [INFO][4779] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.242 [INFO][4779] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.251 [INFO][4779] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.253 [INFO][4779] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.264 [INFO][4779] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 
03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.264 [INFO][4779] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.271 [INFO][4779] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487 Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.282 [INFO][4779] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.301 [INFO][4779] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.131/26] block=192.168.106.128/26 handle="k8s-pod-network.bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.301 [INFO][4779] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.131/26] handle="k8s-pod-network.bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.301 [INFO][4779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:30:07.353862 containerd[1674]: 2025-04-30 03:30:07.301 [INFO][4779] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.131/26] IPv6=[] ContainerID="bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" HandleID="k8s-pod-network.bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:07.354830 containerd[1674]: 2025-04-30 03:30:07.306 [INFO][4758] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" Namespace="kube-system" Pod="coredns-7db6d8ff4d-msddw" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a78b558b-dccc-4c43-976d-3e3ed712a212", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"", Pod:"coredns-7db6d8ff4d-msddw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali0feb25723c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:07.354830 containerd[1674]: 2025-04-30 03:30:07.307 [INFO][4758] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.131/32] ContainerID="bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" Namespace="kube-system" Pod="coredns-7db6d8ff4d-msddw" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:07.354830 containerd[1674]: 2025-04-30 03:30:07.307 [INFO][4758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0feb25723c7 ContainerID="bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" Namespace="kube-system" Pod="coredns-7db6d8ff4d-msddw" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:07.354830 containerd[1674]: 2025-04-30 03:30:07.317 [INFO][4758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" Namespace="kube-system" Pod="coredns-7db6d8ff4d-msddw" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:07.354830 containerd[1674]: 2025-04-30 03:30:07.319 [INFO][4758] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" Namespace="kube-system" Pod="coredns-7db6d8ff4d-msddw" 
WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a78b558b-dccc-4c43-976d-3e3ed712a212", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487", Pod:"coredns-7db6d8ff4d-msddw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0feb25723c7", MAC:"9e:9f:c4:f7:8f:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:07.354830 containerd[1674]: 2025-04-30 03:30:07.349 
[INFO][4758] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487" Namespace="kube-system" Pod="coredns-7db6d8ff4d-msddw" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:07.407401 containerd[1674]: time="2025-04-30T03:30:07.406745202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:07.407401 containerd[1674]: time="2025-04-30T03:30:07.407199506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:07.407401 containerd[1674]: time="2025-04-30T03:30:07.407227906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:07.407401 containerd[1674]: time="2025-04-30T03:30:07.407331507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:07.433080 containerd[1674]: time="2025-04-30T03:30:07.432926846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gbccc,Uid:9774df4c-daf4-44bc-bfa3-9191c38f8346,Namespace:kube-system,Attempt:1,} returns sandbox id \"63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8\"" Apr 30 03:30:07.452826 systemd[1]: Started cri-containerd-bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487.scope - libcontainer container bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487. 
Apr 30 03:30:07.462236 containerd[1674]: time="2025-04-30T03:30:07.462140118Z" level=info msg="CreateContainer within sandbox \"63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:30:07.521318 containerd[1674]: time="2025-04-30T03:30:07.521255770Z" level=info msg="CreateContainer within sandbox \"63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"84f9a2b10569bc6d5b214a860f5b1d683c968a968b301ef1649891d0a7fbfc4b\"" Apr 30 03:30:07.523692 containerd[1674]: time="2025-04-30T03:30:07.523635792Z" level=info msg="StartContainer for \"84f9a2b10569bc6d5b214a860f5b1d683c968a968b301ef1649891d0a7fbfc4b\"" Apr 30 03:30:07.527529 containerd[1674]: time="2025-04-30T03:30:07.527485428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc9f7b477-z2mdz,Uid:c307cc16-3906-4e89-a216-d52cd8df720e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac\"" Apr 30 03:30:07.534617 containerd[1674]: time="2025-04-30T03:30:07.533702886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:30:07.565621 containerd[1674]: time="2025-04-30T03:30:07.563056260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-msddw,Uid:a78b558b-dccc-4c43-976d-3e3ed712a212,Namespace:kube-system,Attempt:1,} returns sandbox id \"bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487\"" Apr 30 03:30:07.569618 containerd[1674]: time="2025-04-30T03:30:07.568801313Z" level=info msg="CreateContainer within sandbox \"bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:30:07.591001 systemd[1]: Started cri-containerd-84f9a2b10569bc6d5b214a860f5b1d683c968a968b301ef1649891d0a7fbfc4b.scope - libcontainer container 
84f9a2b10569bc6d5b214a860f5b1d683c968a968b301ef1649891d0a7fbfc4b. Apr 30 03:30:07.634742 containerd[1674]: time="2025-04-30T03:30:07.634520726Z" level=info msg="CreateContainer within sandbox \"bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b9d089ea6cd0a6abef75ed65528e05da3126331414cd3c2229be292e0b39882b\"" Apr 30 03:30:07.635617 containerd[1674]: time="2025-04-30T03:30:07.635322334Z" level=info msg="StartContainer for \"b9d089ea6cd0a6abef75ed65528e05da3126331414cd3c2229be292e0b39882b\"" Apr 30 03:30:07.658603 containerd[1674]: time="2025-04-30T03:30:07.658532550Z" level=info msg="StartContainer for \"84f9a2b10569bc6d5b214a860f5b1d683c968a968b301ef1649891d0a7fbfc4b\" returns successfully" Apr 30 03:30:07.693801 systemd[1]: Started cri-containerd-b9d089ea6cd0a6abef75ed65528e05da3126331414cd3c2229be292e0b39882b.scope - libcontainer container b9d089ea6cd0a6abef75ed65528e05da3126331414cd3c2229be292e0b39882b. 
Apr 30 03:30:07.776936 containerd[1674]: time="2025-04-30T03:30:07.776890654Z" level=info msg="StartContainer for \"b9d089ea6cd0a6abef75ed65528e05da3126331414cd3c2229be292e0b39882b\" returns successfully" Apr 30 03:30:08.079416 systemd-networkd[1453]: vxlan.calico: Link UP Apr 30 03:30:08.079427 systemd-networkd[1453]: vxlan.calico: Gained carrier Apr 30 03:30:08.372766 systemd-networkd[1453]: cali051c1897ab9: Gained IPv6LL Apr 30 03:30:08.500779 systemd-networkd[1453]: cali0feb25723c7: Gained IPv6LL Apr 30 03:30:08.528547 containerd[1674]: time="2025-04-30T03:30:08.528495365Z" level=info msg="StopPodSandbox for \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\"" Apr 30 03:30:08.532444 containerd[1674]: time="2025-04-30T03:30:08.529038070Z" level=info msg="StopPodSandbox for \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\"" Apr 30 03:30:08.533625 containerd[1674]: time="2025-04-30T03:30:08.533000307Z" level=info msg="StopPodSandbox for \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\"" Apr 30 03:30:08.663319 kubelet[3215]: I0430 03:30:08.663009 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gbccc" podStartSLOduration=37.66298412 podStartE2EDuration="37.66298412s" podCreationTimestamp="2025-04-30 03:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:30:07.808807952 +0000 UTC m=+49.384199462" watchObservedRunningTime="2025-04-30 03:30:08.66298412 +0000 UTC m=+50.238375630" Apr 30 03:30:08.826101 kubelet[3215]: I0430 03:30:08.824625 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-msddw" podStartSLOduration=37.824573727 podStartE2EDuration="37.824573727s" podCreationTimestamp="2025-04-30 03:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:30:08.824096723 +0000 UTC m=+50.399488233" watchObservedRunningTime="2025-04-30 03:30:08.824573727 +0000 UTC m=+50.399965137" Apr 30 03:30:08.870636 containerd[1674]: 2025-04-30 03:30:08.657 [INFO][5154] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Apr 30 03:30:08.870636 containerd[1674]: 2025-04-30 03:30:08.665 [INFO][5154] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" iface="eth0" netns="/var/run/netns/cni-fa2892a1-b3ef-33f3-6dbb-14616792ad23" Apr 30 03:30:08.870636 containerd[1674]: 2025-04-30 03:30:08.666 [INFO][5154] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" iface="eth0" netns="/var/run/netns/cni-fa2892a1-b3ef-33f3-6dbb-14616792ad23" Apr 30 03:30:08.870636 containerd[1674]: 2025-04-30 03:30:08.667 [INFO][5154] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" iface="eth0" netns="/var/run/netns/cni-fa2892a1-b3ef-33f3-6dbb-14616792ad23" Apr 30 03:30:08.870636 containerd[1674]: 2025-04-30 03:30:08.667 [INFO][5154] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Apr 30 03:30:08.870636 containerd[1674]: 2025-04-30 03:30:08.667 [INFO][5154] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Apr 30 03:30:08.870636 containerd[1674]: 2025-04-30 03:30:08.797 [INFO][5175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" HandleID="k8s-pod-network.491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:08.870636 containerd[1674]: 2025-04-30 03:30:08.798 [INFO][5175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:08.870636 containerd[1674]: 2025-04-30 03:30:08.798 [INFO][5175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:08.870636 containerd[1674]: 2025-04-30 03:30:08.839 [WARNING][5175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" HandleID="k8s-pod-network.491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:08.870636 containerd[1674]: 2025-04-30 03:30:08.839 [INFO][5175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" HandleID="k8s-pod-network.491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:08.870636 containerd[1674]: 2025-04-30 03:30:08.847 [INFO][5175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:08.870636 containerd[1674]: 2025-04-30 03:30:08.855 [INFO][5154] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Apr 30 03:30:08.870636 containerd[1674]: time="2025-04-30T03:30:08.859120849Z" level=info msg="TearDown network for sandbox \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\" successfully" Apr 30 03:30:08.870636 containerd[1674]: time="2025-04-30T03:30:08.859159150Z" level=info msg="StopPodSandbox for \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\" returns successfully" Apr 30 03:30:08.870636 containerd[1674]: time="2025-04-30T03:30:08.863399089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc9f7b477-ksgxv,Uid:a556b718-e37a-4703-8148-82b2fa7a6e46,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:30:08.870423 systemd[1]: run-netns-cni\x2dfa2892a1\x2db3ef\x2d33f3\x2d6dbb\x2d14616792ad23.mount: Deactivated successfully. 
Apr 30 03:30:08.888367 containerd[1674]: 2025-04-30 03:30:08.732 [INFO][5162] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Apr 30 03:30:08.888367 containerd[1674]: 2025-04-30 03:30:08.732 [INFO][5162] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" iface="eth0" netns="/var/run/netns/cni-d4e65bdb-3be8-3d83-90f1-9e760c901669" Apr 30 03:30:08.888367 containerd[1674]: 2025-04-30 03:30:08.737 [INFO][5162] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" iface="eth0" netns="/var/run/netns/cni-d4e65bdb-3be8-3d83-90f1-9e760c901669" Apr 30 03:30:08.888367 containerd[1674]: 2025-04-30 03:30:08.738 [INFO][5162] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" iface="eth0" netns="/var/run/netns/cni-d4e65bdb-3be8-3d83-90f1-9e760c901669" Apr 30 03:30:08.888367 containerd[1674]: 2025-04-30 03:30:08.739 [INFO][5162] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Apr 30 03:30:08.888367 containerd[1674]: 2025-04-30 03:30:08.739 [INFO][5162] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Apr 30 03:30:08.888367 containerd[1674]: 2025-04-30 03:30:08.849 [INFO][5185] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" HandleID="k8s-pod-network.74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Workload="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:08.888367 containerd[1674]: 2025-04-30 03:30:08.852 [INFO][5185] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:08.888367 containerd[1674]: 2025-04-30 03:30:08.852 [INFO][5185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:08.888367 containerd[1674]: 2025-04-30 03:30:08.876 [WARNING][5185] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" HandleID="k8s-pod-network.74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Workload="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:08.888367 containerd[1674]: 2025-04-30 03:30:08.876 [INFO][5185] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" HandleID="k8s-pod-network.74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Workload="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:08.888367 containerd[1674]: 2025-04-30 03:30:08.880 [INFO][5185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:08.888367 containerd[1674]: 2025-04-30 03:30:08.882 [INFO][5162] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Apr 30 03:30:08.890171 containerd[1674]: time="2025-04-30T03:30:08.890032138Z" level=info msg="TearDown network for sandbox \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\" successfully" Apr 30 03:30:08.890676 containerd[1674]: time="2025-04-30T03:30:08.890375641Z" level=info msg="StopPodSandbox for \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\" returns successfully" Apr 30 03:30:08.895611 containerd[1674]: time="2025-04-30T03:30:08.895248986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kz4tb,Uid:dcdc5f6d-cefa-4e15-8498-441a243c70ee,Namespace:calico-system,Attempt:1,}" Apr 30 03:30:08.898995 systemd[1]: run-netns-cni\x2dd4e65bdb\x2d3be8\x2d3d83\x2d90f1\x2d9e760c901669.mount: Deactivated successfully. Apr 30 03:30:08.969140 containerd[1674]: 2025-04-30 03:30:08.776 [INFO][5158] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Apr 30 03:30:08.969140 containerd[1674]: 2025-04-30 03:30:08.776 [INFO][5158] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" iface="eth0" netns="/var/run/netns/cni-cbd7b4aa-0758-4866-dd7a-30300cc91b75" Apr 30 03:30:08.969140 containerd[1674]: 2025-04-30 03:30:08.777 [INFO][5158] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" iface="eth0" netns="/var/run/netns/cni-cbd7b4aa-0758-4866-dd7a-30300cc91b75" Apr 30 03:30:08.969140 containerd[1674]: 2025-04-30 03:30:08.779 [INFO][5158] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" iface="eth0" netns="/var/run/netns/cni-cbd7b4aa-0758-4866-dd7a-30300cc91b75" Apr 30 03:30:08.969140 containerd[1674]: 2025-04-30 03:30:08.779 [INFO][5158] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Apr 30 03:30:08.969140 containerd[1674]: 2025-04-30 03:30:08.779 [INFO][5158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Apr 30 03:30:08.969140 containerd[1674]: 2025-04-30 03:30:08.912 [INFO][5191] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" HandleID="k8s-pod-network.a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 03:30:08.969140 containerd[1674]: 2025-04-30 03:30:08.912 [INFO][5191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:08.969140 containerd[1674]: 2025-04-30 03:30:08.912 [INFO][5191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:08.969140 containerd[1674]: 2025-04-30 03:30:08.932 [WARNING][5191] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" HandleID="k8s-pod-network.a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 03:30:08.969140 containerd[1674]: 2025-04-30 03:30:08.932 [INFO][5191] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" HandleID="k8s-pod-network.a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 03:30:08.969140 containerd[1674]: 2025-04-30 03:30:08.935 [INFO][5191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:08.969140 containerd[1674]: 2025-04-30 03:30:08.941 [INFO][5158] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Apr 30 03:30:08.970886 containerd[1674]: time="2025-04-30T03:30:08.970850892Z" level=info msg="TearDown network for sandbox \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\" successfully" Apr 30 03:30:08.972911 containerd[1674]: time="2025-04-30T03:30:08.972880911Z" level=info msg="StopPodSandbox for \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\" returns successfully" Apr 30 03:30:08.978466 containerd[1674]: time="2025-04-30T03:30:08.978419862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7478c6c9-ncjxs,Uid:a95e4c6a-c6b4-4ac1-a191-4c74878eb86f,Namespace:calico-system,Attempt:1,}" Apr 30 03:30:09.142627 systemd-networkd[1453]: vxlan.calico: Gained IPv6LL Apr 30 03:30:09.145799 systemd-networkd[1453]: cali2f248a6691d: Gained IPv6LL Apr 30 03:30:09.170941 systemd-networkd[1453]: cali1caa0b4316b: Link UP Apr 30 03:30:09.172210 systemd-networkd[1453]: cali1caa0b4316b: Gained carrier Apr 30 
03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.019 [INFO][5206] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0 calico-apiserver-5bc9f7b477- calico-apiserver a556b718-e37a-4703-8148-82b2fa7a6e46 784 0 2025-04-30 03:29:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bc9f7b477 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-e2728433b6 calico-apiserver-5bc9f7b477-ksgxv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1caa0b4316b [] []}} ContainerID="6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-ksgxv" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-" Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.019 [INFO][5206] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-ksgxv" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.082 [INFO][5240] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" HandleID="k8s-pod-network.6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.095 [INFO][5240] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" HandleID="k8s-pod-network.6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a7c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-e2728433b6", "pod":"calico-apiserver-5bc9f7b477-ksgxv", "timestamp":"2025-04-30 03:30:09.082759836 +0000 UTC"}, Hostname:"ci-4081.3.3-a-e2728433b6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.096 [INFO][5240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.096 [INFO][5240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.096 [INFO][5240] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-e2728433b6' Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.099 [INFO][5240] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.106 [INFO][5240] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.113 [INFO][5240] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.117 [INFO][5240] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.122 [INFO][5240] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.122 [INFO][5240] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.125 [INFO][5240] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.134 [INFO][5240] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.156 [INFO][5240] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.106.132/26] block=192.168.106.128/26 handle="k8s-pod-network.6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.156 [INFO][5240] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.132/26] handle="k8s-pod-network.6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.156 [INFO][5240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:09.204161 containerd[1674]: 2025-04-30 03:30:09.156 [INFO][5240] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.132/26] IPv6=[] ContainerID="6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" HandleID="k8s-pod-network.6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:09.205326 containerd[1674]: 2025-04-30 03:30:09.164 [INFO][5206] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-ksgxv" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0", GenerateName:"calico-apiserver-5bc9f7b477-", Namespace:"calico-apiserver", SelfLink:"", UID:"a556b718-e37a-4703-8148-82b2fa7a6e46", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc9f7b477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"", Pod:"calico-apiserver-5bc9f7b477-ksgxv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1caa0b4316b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:09.205326 containerd[1674]: 2025-04-30 03:30:09.164 [INFO][5206] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.132/32] ContainerID="6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-ksgxv" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:09.205326 containerd[1674]: 2025-04-30 03:30:09.164 [INFO][5206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1caa0b4316b ContainerID="6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-ksgxv" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:09.205326 containerd[1674]: 2025-04-30 03:30:09.172 [INFO][5206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-ksgxv" 
WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:09.205326 containerd[1674]: 2025-04-30 03:30:09.173 [INFO][5206] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-ksgxv" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0", GenerateName:"calico-apiserver-5bc9f7b477-", Namespace:"calico-apiserver", SelfLink:"", UID:"a556b718-e37a-4703-8148-82b2fa7a6e46", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc9f7b477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a", Pod:"calico-apiserver-5bc9f7b477-ksgxv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1caa0b4316b", MAC:"32:1f:5c:09:13:02", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:09.205326 containerd[1674]: 2025-04-30 03:30:09.198 [INFO][5206] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a" Namespace="calico-apiserver" Pod="calico-apiserver-5bc9f7b477-ksgxv" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:09.279629 containerd[1674]: time="2025-04-30T03:30:09.273117711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:09.279629 containerd[1674]: time="2025-04-30T03:30:09.273228412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:09.279629 containerd[1674]: time="2025-04-30T03:30:09.273249412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:09.279629 containerd[1674]: time="2025-04-30T03:30:09.273351613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:09.283492 systemd-networkd[1453]: calief4a0becc64: Link UP Apr 30 03:30:09.284927 systemd-networkd[1453]: calief4a0becc64: Gained carrier Apr 30 03:30:09.325106 systemd[1]: Started cri-containerd-6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a.scope - libcontainer container 6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a. 
Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.041 [INFO][5210] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0 csi-node-driver- calico-system dcdc5f6d-cefa-4e15-8498-441a243c70ee 785 0 2025-04-30 03:29:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-a-e2728433b6 csi-node-driver-kz4tb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calief4a0becc64 [] []}} ContainerID="48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" Namespace="calico-system" Pod="csi-node-driver-kz4tb" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-" Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.041 [INFO][5210] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" Namespace="calico-system" Pod="csi-node-driver-kz4tb" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.149 [INFO][5247] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" HandleID="k8s-pod-network.48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" Workload="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.167 [INFO][5247] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" 
HandleID="k8s-pod-network.48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" Workload="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000219270), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-e2728433b6", "pod":"csi-node-driver-kz4tb", "timestamp":"2025-04-30 03:30:09.149277256 +0000 UTC"}, Hostname:"ci-4081.3.3-a-e2728433b6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.168 [INFO][5247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.168 [INFO][5247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.169 [INFO][5247] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-e2728433b6' Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.172 [INFO][5247] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.189 [INFO][5247] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.201 [INFO][5247] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.206 [INFO][5247] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.210 [INFO][5247] ipam/ipam.go 232: Affinity is confirmed and block has 
been loaded cidr=192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.210 [INFO][5247] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.212 [INFO][5247] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942 Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.236 [INFO][5247] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.264 [INFO][5247] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.133/26] block=192.168.106.128/26 handle="k8s-pod-network.48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.265 [INFO][5247] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.133/26] handle="k8s-pod-network.48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.265 [INFO][5247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:30:09.336705 containerd[1674]: 2025-04-30 03:30:09.266 [INFO][5247] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.133/26] IPv6=[] ContainerID="48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" HandleID="k8s-pod-network.48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" Workload="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:09.337750 containerd[1674]: 2025-04-30 03:30:09.274 [INFO][5210] cni-plugin/k8s.go 386: Populated endpoint ContainerID="48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" Namespace="calico-system" Pod="csi-node-driver-kz4tb" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dcdc5f6d-cefa-4e15-8498-441a243c70ee", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"", Pod:"csi-node-driver-kz4tb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calief4a0becc64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:09.337750 containerd[1674]: 2025-04-30 03:30:09.274 [INFO][5210] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.133/32] ContainerID="48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" Namespace="calico-system" Pod="csi-node-driver-kz4tb" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:09.337750 containerd[1674]: 2025-04-30 03:30:09.275 [INFO][5210] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief4a0becc64 ContainerID="48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" Namespace="calico-system" Pod="csi-node-driver-kz4tb" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:09.337750 containerd[1674]: 2025-04-30 03:30:09.284 [INFO][5210] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" Namespace="calico-system" Pod="csi-node-driver-kz4tb" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:09.337750 containerd[1674]: 2025-04-30 03:30:09.285 [INFO][5210] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" Namespace="calico-system" Pod="csi-node-driver-kz4tb" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"dcdc5f6d-cefa-4e15-8498-441a243c70ee", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942", Pod:"csi-node-driver-kz4tb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calief4a0becc64", MAC:"72:d9:ec:ad:04:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:09.337750 containerd[1674]: 2025-04-30 03:30:09.324 [INFO][5210] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942" Namespace="calico-system" Pod="csi-node-driver-kz4tb" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:09.365877 systemd-networkd[1453]: calidaecf4f284b: Link UP Apr 30 03:30:09.366778 systemd-networkd[1453]: calidaecf4f284b: Gained carrier Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.138 [INFO][5231] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0 calico-kube-controllers-f7478c6c9- calico-system a95e4c6a-c6b4-4ac1-a191-4c74878eb86f 787 0 2025-04-30 03:29:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f7478c6c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-a-e2728433b6 calico-kube-controllers-f7478c6c9-ncjxs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidaecf4f284b [] []}} ContainerID="8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" Namespace="calico-system" Pod="calico-kube-controllers-f7478c6c9-ncjxs" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-" Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.138 [INFO][5231] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" Namespace="calico-system" Pod="calico-kube-controllers-f7478c6c9-ncjxs" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.239 [INFO][5257] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" HandleID="k8s-pod-network.8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.273 [INFO][5257] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" HandleID="k8s-pod-network.8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" 
Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000396790), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-e2728433b6", "pod":"calico-kube-controllers-f7478c6c9-ncjxs", "timestamp":"2025-04-30 03:30:09.239577998 +0000 UTC"}, Hostname:"ci-4081.3.3-a-e2728433b6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.275 [INFO][5257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.276 [INFO][5257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.276 [INFO][5257] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-e2728433b6' Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.281 [INFO][5257] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.302 [INFO][5257] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.319 [INFO][5257] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.326 [INFO][5257] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.330 [INFO][5257] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 
host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.330 [INFO][5257] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.332 [INFO][5257] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.344 [INFO][5257] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.360 [INFO][5257] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.134/26] block=192.168.106.128/26 handle="k8s-pod-network.8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.360 [INFO][5257] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.134/26] handle="k8s-pod-network.8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" host="ci-4081.3.3-a-e2728433b6" Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.360 [INFO][5257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:30:09.397623 containerd[1674]: 2025-04-30 03:30:09.360 [INFO][5257] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.134/26] IPv6=[] ContainerID="8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" HandleID="k8s-pod-network.8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 03:30:09.398645 containerd[1674]: 2025-04-30 03:30:09.362 [INFO][5231] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" Namespace="calico-system" Pod="calico-kube-controllers-f7478c6c9-ncjxs" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0", GenerateName:"calico-kube-controllers-f7478c6c9-", Namespace:"calico-system", SelfLink:"", UID:"a95e4c6a-c6b4-4ac1-a191-4c74878eb86f", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7478c6c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"", Pod:"calico-kube-controllers-f7478c6c9-ncjxs", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidaecf4f284b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:09.398645 containerd[1674]: 2025-04-30 03:30:09.362 [INFO][5231] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.134/32] ContainerID="8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" Namespace="calico-system" Pod="calico-kube-controllers-f7478c6c9-ncjxs" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 03:30:09.398645 containerd[1674]: 2025-04-30 03:30:09.363 [INFO][5231] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidaecf4f284b ContainerID="8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" Namespace="calico-system" Pod="calico-kube-controllers-f7478c6c9-ncjxs" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 03:30:09.398645 containerd[1674]: 2025-04-30 03:30:09.366 [INFO][5231] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" Namespace="calico-system" Pod="calico-kube-controllers-f7478c6c9-ncjxs" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 03:30:09.398645 containerd[1674]: 2025-04-30 03:30:09.367 [INFO][5231] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" Namespace="calico-system" Pod="calico-kube-controllers-f7478c6c9-ncjxs" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0", GenerateName:"calico-kube-controllers-f7478c6c9-", Namespace:"calico-system", SelfLink:"", UID:"a95e4c6a-c6b4-4ac1-a191-4c74878eb86f", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7478c6c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f", Pod:"calico-kube-controllers-f7478c6c9-ncjxs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidaecf4f284b", MAC:"7a:75:5c:8c:4e:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:09.398645 containerd[1674]: 2025-04-30 03:30:09.393 [INFO][5231] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f" Namespace="calico-system" Pod="calico-kube-controllers-f7478c6c9-ncjxs" WorkloadEndpoint="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 
03:30:09.439286 containerd[1674]: time="2025-04-30T03:30:09.438316452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:09.439286 containerd[1674]: time="2025-04-30T03:30:09.438381653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:09.439286 containerd[1674]: time="2025-04-30T03:30:09.438404653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:09.439286 containerd[1674]: time="2025-04-30T03:30:09.438514154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:09.458185 containerd[1674]: time="2025-04-30T03:30:09.457987236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:09.459297 containerd[1674]: time="2025-04-30T03:30:09.458194038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:09.459297 containerd[1674]: time="2025-04-30T03:30:09.458213838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:09.459440 containerd[1674]: time="2025-04-30T03:30:09.459091446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:09.499846 systemd[1]: Started cri-containerd-8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f.scope - libcontainer container 8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f. 
Apr 30 03:30:09.513836 systemd[1]: Started cri-containerd-48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942.scope - libcontainer container 48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942. Apr 30 03:30:09.592695 containerd[1674]: time="2025-04-30T03:30:09.591813384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc9f7b477-ksgxv,Uid:a556b718-e37a-4703-8148-82b2fa7a6e46,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a\"" Apr 30 03:30:09.597209 containerd[1674]: time="2025-04-30T03:30:09.597169134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kz4tb,Uid:dcdc5f6d-cefa-4e15-8498-441a243c70ee,Namespace:calico-system,Attempt:1,} returns sandbox id \"48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942\"" Apr 30 03:30:09.628094 containerd[1674]: time="2025-04-30T03:30:09.628056122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7478c6c9-ncjxs,Uid:a95e4c6a-c6b4-4ac1-a191-4c74878eb86f,Namespace:calico-system,Attempt:1,} returns sandbox id \"8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f\"" Apr 30 03:30:09.873017 systemd[1]: run-netns-cni\x2dcbd7b4aa\x2d0758\x2d4866\x2ddd7a\x2d30300cc91b75.mount: Deactivated successfully. 
Apr 30 03:30:10.421993 systemd-networkd[1453]: calief4a0becc64: Gained IPv6LL Apr 30 03:30:10.634551 containerd[1674]: time="2025-04-30T03:30:10.634495910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:10.637041 containerd[1674]: time="2025-04-30T03:30:10.636972633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:30:10.641462 containerd[1674]: time="2025-04-30T03:30:10.641403175Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:10.646101 containerd[1674]: time="2025-04-30T03:30:10.646039218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:10.646831 containerd[1674]: time="2025-04-30T03:30:10.646679124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 3.112805337s" Apr 30 03:30:10.646831 containerd[1674]: time="2025-04-30T03:30:10.646720424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:30:10.648145 containerd[1674]: time="2025-04-30T03:30:10.647884135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:30:10.649703 containerd[1674]: time="2025-04-30T03:30:10.649612151Z" level=info 
msg="CreateContainer within sandbox \"d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:30:10.706674 containerd[1674]: time="2025-04-30T03:30:10.706528982Z" level=info msg="CreateContainer within sandbox \"d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3568e3f6130049c963097c3838fd77b45c0f893a60846b34a4ed676984e315bb\"" Apr 30 03:30:10.708688 containerd[1674]: time="2025-04-30T03:30:10.707495391Z" level=info msg="StartContainer for \"3568e3f6130049c963097c3838fd77b45c0f893a60846b34a4ed676984e315bb\"" Apr 30 03:30:10.743787 systemd[1]: Started cri-containerd-3568e3f6130049c963097c3838fd77b45c0f893a60846b34a4ed676984e315bb.scope - libcontainer container 3568e3f6130049c963097c3838fd77b45c0f893a60846b34a4ed676984e315bb. Apr 30 03:30:10.792098 containerd[1674]: time="2025-04-30T03:30:10.791967579Z" level=info msg="StartContainer for \"3568e3f6130049c963097c3838fd77b45c0f893a60846b34a4ed676984e315bb\" returns successfully" Apr 30 03:30:10.868757 systemd-networkd[1453]: cali1caa0b4316b: Gained IPv6LL Apr 30 03:30:10.980885 containerd[1674]: time="2025-04-30T03:30:10.980736440Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:10.983927 containerd[1674]: time="2025-04-30T03:30:10.983874269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:30:10.985977 containerd[1674]: time="2025-04-30T03:30:10.985917188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 337.999053ms" Apr 30 03:30:10.986082 containerd[1674]: time="2025-04-30T03:30:10.985980589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:30:10.987142 containerd[1674]: time="2025-04-30T03:30:10.987111199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:30:10.991029 containerd[1674]: time="2025-04-30T03:30:10.990994436Z" level=info msg="CreateContainer within sandbox \"6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:30:11.035616 containerd[1674]: time="2025-04-30T03:30:11.034984446Z" level=info msg="CreateContainer within sandbox \"6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"73cc71a6694c036ce303dada241051f0efaacda7e4aa8f31269433ac53e17d92\"" Apr 30 03:30:11.036221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2733245263.mount: Deactivated successfully. Apr 30 03:30:11.041692 containerd[1674]: time="2025-04-30T03:30:11.040917501Z" level=info msg="StartContainer for \"73cc71a6694c036ce303dada241051f0efaacda7e4aa8f31269433ac53e17d92\"" Apr 30 03:30:11.094902 systemd[1]: Started cri-containerd-73cc71a6694c036ce303dada241051f0efaacda7e4aa8f31269433ac53e17d92.scope - libcontainer container 73cc71a6694c036ce303dada241051f0efaacda7e4aa8f31269433ac53e17d92. 
Apr 30 03:30:11.151432 containerd[1674]: time="2025-04-30T03:30:11.151388732Z" level=info msg="StartContainer for \"73cc71a6694c036ce303dada241051f0efaacda7e4aa8f31269433ac53e17d92\" returns successfully" Apr 30 03:30:11.380788 systemd-networkd[1453]: calidaecf4f284b: Gained IPv6LL Apr 30 03:30:11.773285 kubelet[3215]: I0430 03:30:11.772998 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bc9f7b477-ksgxv" podStartSLOduration=33.380751143 podStartE2EDuration="34.77296403s" podCreationTimestamp="2025-04-30 03:29:37 +0000 UTC" firstStartedPulling="2025-04-30 03:30:09.594714711 +0000 UTC m=+51.170106121" lastFinishedPulling="2025-04-30 03:30:10.986927498 +0000 UTC m=+52.562319008" observedRunningTime="2025-04-30 03:30:11.771457516 +0000 UTC m=+53.346848926" watchObservedRunningTime="2025-04-30 03:30:11.77296403 +0000 UTC m=+53.348355440" Apr 30 03:30:12.568751 containerd[1674]: time="2025-04-30T03:30:12.568697952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:12.570520 containerd[1674]: time="2025-04-30T03:30:12.570463669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:30:12.574385 containerd[1674]: time="2025-04-30T03:30:12.574330105Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:12.578657 containerd[1674]: time="2025-04-30T03:30:12.578602745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:12.579326 containerd[1674]: time="2025-04-30T03:30:12.579170550Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" 
with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.59201815s" Apr 30 03:30:12.579326 containerd[1674]: time="2025-04-30T03:30:12.579208250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:30:12.581158 containerd[1674]: time="2025-04-30T03:30:12.580762265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:30:12.582203 containerd[1674]: time="2025-04-30T03:30:12.582003176Z" level=info msg="CreateContainer within sandbox \"48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:30:12.614283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2363358778.mount: Deactivated successfully. Apr 30 03:30:12.621859 containerd[1674]: time="2025-04-30T03:30:12.621818248Z" level=info msg="CreateContainer within sandbox \"48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9503e7e4700e689637bd170f0c182fe568889b028c73767232f2c2fff7106e8e\"" Apr 30 03:30:12.622536 containerd[1674]: time="2025-04-30T03:30:12.622507954Z" level=info msg="StartContainer for \"9503e7e4700e689637bd170f0c182fe568889b028c73767232f2c2fff7106e8e\"" Apr 30 03:30:12.656780 systemd[1]: Started cri-containerd-9503e7e4700e689637bd170f0c182fe568889b028c73767232f2c2fff7106e8e.scope - libcontainer container 9503e7e4700e689637bd170f0c182fe568889b028c73767232f2c2fff7106e8e. 
Apr 30 03:30:12.685948 containerd[1674]: time="2025-04-30T03:30:12.685906346Z" level=info msg="StartContainer for \"9503e7e4700e689637bd170f0c182fe568889b028c73767232f2c2fff7106e8e\" returns successfully" Apr 30 03:30:12.759011 kubelet[3215]: I0430 03:30:12.758977 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:12.759572 kubelet[3215]: I0430 03:30:12.759547 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:14.701975 containerd[1674]: time="2025-04-30T03:30:14.701922151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:14.704631 containerd[1674]: time="2025-04-30T03:30:14.704480775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:30:14.707740 containerd[1674]: time="2025-04-30T03:30:14.707681605Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:14.712456 containerd[1674]: time="2025-04-30T03:30:14.712386249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:14.713217 containerd[1674]: time="2025-04-30T03:30:14.713052355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.13225239s" Apr 30 03:30:14.713217 containerd[1674]: 
time="2025-04-30T03:30:14.713093855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:30:14.714802 containerd[1674]: time="2025-04-30T03:30:14.714381067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:30:14.738478 containerd[1674]: time="2025-04-30T03:30:14.738443592Z" level=info msg="CreateContainer within sandbox \"8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:30:14.772514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3760490483.mount: Deactivated successfully. Apr 30 03:30:14.778297 containerd[1674]: time="2025-04-30T03:30:14.778254873Z" level=info msg="CreateContainer within sandbox \"8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b29d710e270bbcae3fc19474538e15f0e5cfe338b94bda8238e9c9181efe4b87\"" Apr 30 03:30:14.778989 containerd[1674]: time="2025-04-30T03:30:14.778762278Z" level=info msg="StartContainer for \"b29d710e270bbcae3fc19474538e15f0e5cfe338b94bda8238e9c9181efe4b87\"" Apr 30 03:30:14.809765 systemd[1]: Started cri-containerd-b29d710e270bbcae3fc19474538e15f0e5cfe338b94bda8238e9c9181efe4b87.scope - libcontainer container b29d710e270bbcae3fc19474538e15f0e5cfe338b94bda8238e9c9181efe4b87. 
Apr 30 03:30:14.854694 containerd[1674]: time="2025-04-30T03:30:14.854647707Z" level=info msg="StartContainer for \"b29d710e270bbcae3fc19474538e15f0e5cfe338b94bda8238e9c9181efe4b87\" returns successfully" Apr 30 03:30:15.800751 kubelet[3215]: I0430 03:30:15.796038 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bc9f7b477-z2mdz" podStartSLOduration=35.679966482 podStartE2EDuration="38.796013748s" podCreationTimestamp="2025-04-30 03:29:37 +0000 UTC" firstStartedPulling="2025-04-30 03:30:07.531630067 +0000 UTC m=+49.107021477" lastFinishedPulling="2025-04-30 03:30:10.647677333 +0000 UTC m=+52.223068743" observedRunningTime="2025-04-30 03:30:11.794899234 +0000 UTC m=+53.370290644" watchObservedRunningTime="2025-04-30 03:30:15.796013748 +0000 UTC m=+57.371405158" Apr 30 03:30:15.802334 systemd[1]: run-containerd-runc-k8s.io-b29d710e270bbcae3fc19474538e15f0e5cfe338b94bda8238e9c9181efe4b87-runc.r7sXBn.mount: Deactivated successfully. 
Apr 30 03:30:15.862308 kubelet[3215]: I0430 03:30:15.861306 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f7478c6c9-ncjxs" podStartSLOduration=33.776407843 podStartE2EDuration="38.861283775s" podCreationTimestamp="2025-04-30 03:29:37 +0000 UTC" firstStartedPulling="2025-04-30 03:30:09.629245233 +0000 UTC m=+51.204636643" lastFinishedPulling="2025-04-30 03:30:14.714121165 +0000 UTC m=+56.289512575" observedRunningTime="2025-04-30 03:30:15.803795323 +0000 UTC m=+57.379186733" watchObservedRunningTime="2025-04-30 03:30:15.861283775 +0000 UTC m=+57.436675185" Apr 30 03:30:16.248833 containerd[1674]: time="2025-04-30T03:30:16.248785896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:16.251633 containerd[1674]: time="2025-04-30T03:30:16.251543123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:30:16.255459 containerd[1674]: time="2025-04-30T03:30:16.255399860Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:16.260773 containerd[1674]: time="2025-04-30T03:30:16.260733311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:16.261718 containerd[1674]: time="2025-04-30T03:30:16.261454718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.547035651s" Apr 30 03:30:16.261718 containerd[1674]: time="2025-04-30T03:30:16.261502118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:30:16.264229 containerd[1674]: time="2025-04-30T03:30:16.264186544Z" level=info msg="CreateContainer within sandbox \"48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:30:16.308800 containerd[1674]: time="2025-04-30T03:30:16.308755272Z" level=info msg="CreateContainer within sandbox \"48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2528cfae875716484f61b68caa409a30875ba031525d9570d4a7033a0c37bf46\"" Apr 30 03:30:16.310000 containerd[1674]: time="2025-04-30T03:30:16.309265177Z" level=info msg="StartContainer for \"2528cfae875716484f61b68caa409a30875ba031525d9570d4a7033a0c37bf46\"" Apr 30 03:30:16.338759 systemd[1]: Started cri-containerd-2528cfae875716484f61b68caa409a30875ba031525d9570d4a7033a0c37bf46.scope - libcontainer container 2528cfae875716484f61b68caa409a30875ba031525d9570d4a7033a0c37bf46. 
Apr 30 03:30:16.368313 containerd[1674]: time="2025-04-30T03:30:16.368264244Z" level=info msg="StartContainer for \"2528cfae875716484f61b68caa409a30875ba031525d9570d4a7033a0c37bf46\" returns successfully" Apr 30 03:30:16.628537 kubelet[3215]: I0430 03:30:16.628352 3215 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:30:16.628537 kubelet[3215]: I0430 03:30:16.628391 3215 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:30:16.802689 kubelet[3215]: I0430 03:30:16.802627 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kz4tb" podStartSLOduration=33.138785035 podStartE2EDuration="39.802610315s" podCreationTimestamp="2025-04-30 03:29:37 +0000 UTC" firstStartedPulling="2025-04-30 03:30:09.598627448 +0000 UTC m=+51.174018858" lastFinishedPulling="2025-04-30 03:30:16.262452728 +0000 UTC m=+57.837844138" observedRunningTime="2025-04-30 03:30:16.802504714 +0000 UTC m=+58.377896224" watchObservedRunningTime="2025-04-30 03:30:16.802610315 +0000 UTC m=+58.378001725" Apr 30 03:30:18.528432 containerd[1674]: time="2025-04-30T03:30:18.528114287Z" level=info msg="StopPodSandbox for \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\"" Apr 30 03:30:18.562882 kubelet[3215]: I0430 03:30:18.562788 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:18.641459 containerd[1674]: 2025-04-30 03:30:18.566 [WARNING][5680] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0", GenerateName:"calico-apiserver-5bc9f7b477-", Namespace:"calico-apiserver", SelfLink:"", UID:"c307cc16-3906-4e89-a216-d52cd8df720e", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc9f7b477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac", Pod:"calico-apiserver-5bc9f7b477-z2mdz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f248a6691d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:18.641459 containerd[1674]: 2025-04-30 03:30:18.566 [INFO][5680] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Apr 30 03:30:18.641459 containerd[1674]: 2025-04-30 03:30:18.566 [INFO][5680] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" iface="eth0" netns="" Apr 30 03:30:18.641459 containerd[1674]: 2025-04-30 03:30:18.566 [INFO][5680] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Apr 30 03:30:18.641459 containerd[1674]: 2025-04-30 03:30:18.566 [INFO][5680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Apr 30 03:30:18.641459 containerd[1674]: 2025-04-30 03:30:18.600 [INFO][5689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" HandleID="k8s-pod-network.2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:18.641459 containerd[1674]: 2025-04-30 03:30:18.600 [INFO][5689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:18.641459 containerd[1674]: 2025-04-30 03:30:18.600 [INFO][5689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:18.641459 containerd[1674]: 2025-04-30 03:30:18.627 [WARNING][5689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" HandleID="k8s-pod-network.2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:18.641459 containerd[1674]: 2025-04-30 03:30:18.627 [INFO][5689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" HandleID="k8s-pod-network.2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:18.641459 containerd[1674]: 2025-04-30 03:30:18.636 [INFO][5689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:18.641459 containerd[1674]: 2025-04-30 03:30:18.639 [INFO][5680] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Apr 30 03:30:18.641459 containerd[1674]: time="2025-04-30T03:30:18.640750169Z" level=info msg="TearDown network for sandbox \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\" successfully" Apr 30 03:30:18.641459 containerd[1674]: time="2025-04-30T03:30:18.640779369Z" level=info msg="StopPodSandbox for \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\" returns successfully" Apr 30 03:30:18.642540 containerd[1674]: time="2025-04-30T03:30:18.641519376Z" level=info msg="RemovePodSandbox for \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\"" Apr 30 03:30:18.642540 containerd[1674]: time="2025-04-30T03:30:18.641552677Z" level=info msg="Forcibly stopping sandbox \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\"" Apr 30 03:30:18.749901 containerd[1674]: 2025-04-30 03:30:18.699 [WARNING][5709] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0", GenerateName:"calico-apiserver-5bc9f7b477-", Namespace:"calico-apiserver", SelfLink:"", UID:"c307cc16-3906-4e89-a216-d52cd8df720e", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc9f7b477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"d8914502a0d72af866c2e67f0dced61211b841ddb1eec3740e33cfd496a85fac", Pod:"calico-apiserver-5bc9f7b477-z2mdz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f248a6691d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:18.749901 containerd[1674]: 2025-04-30 03:30:18.699 [INFO][5709] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Apr 30 03:30:18.749901 containerd[1674]: 2025-04-30 03:30:18.699 [INFO][5709] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" iface="eth0" netns="" Apr 30 03:30:18.749901 containerd[1674]: 2025-04-30 03:30:18.699 [INFO][5709] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Apr 30 03:30:18.749901 containerd[1674]: 2025-04-30 03:30:18.699 [INFO][5709] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Apr 30 03:30:18.749901 containerd[1674]: 2025-04-30 03:30:18.737 [INFO][5716] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" HandleID="k8s-pod-network.2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:18.749901 containerd[1674]: 2025-04-30 03:30:18.737 [INFO][5716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:18.749901 containerd[1674]: 2025-04-30 03:30:18.737 [INFO][5716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:18.749901 containerd[1674]: 2025-04-30 03:30:18.744 [WARNING][5716] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" HandleID="k8s-pod-network.2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:18.749901 containerd[1674]: 2025-04-30 03:30:18.744 [INFO][5716] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" HandleID="k8s-pod-network.2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--z2mdz-eth0" Apr 30 03:30:18.749901 containerd[1674]: 2025-04-30 03:30:18.746 [INFO][5716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:18.749901 containerd[1674]: 2025-04-30 03:30:18.747 [INFO][5709] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63" Apr 30 03:30:18.749901 containerd[1674]: time="2025-04-30T03:30:18.749806316Z" level=info msg="TearDown network for sandbox \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\" successfully" Apr 30 03:30:18.759271 containerd[1674]: time="2025-04-30T03:30:18.759218807Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:30:18.759407 containerd[1674]: time="2025-04-30T03:30:18.759311608Z" level=info msg="RemovePodSandbox \"2eaada28cd53754e293635f4a732fa5b0604f5af415f018d9b58888956178d63\" returns successfully" Apr 30 03:30:18.760019 containerd[1674]: time="2025-04-30T03:30:18.759940714Z" level=info msg="StopPodSandbox for \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\"" Apr 30 03:30:18.844155 containerd[1674]: 2025-04-30 03:30:18.795 [WARNING][5734] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9774df4c-daf4-44bc-bfa3-9191c38f8346", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8", Pod:"coredns-7db6d8ff4d-gbccc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali051c1897ab9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:18.844155 containerd[1674]: 2025-04-30 03:30:18.795 [INFO][5734] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Apr 30 03:30:18.844155 containerd[1674]: 2025-04-30 03:30:18.795 [INFO][5734] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" iface="eth0" netns="" Apr 30 03:30:18.844155 containerd[1674]: 2025-04-30 03:30:18.795 [INFO][5734] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Apr 30 03:30:18.844155 containerd[1674]: 2025-04-30 03:30:18.795 [INFO][5734] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Apr 30 03:30:18.844155 containerd[1674]: 2025-04-30 03:30:18.829 [INFO][5742] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" HandleID="k8s-pod-network.5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:18.844155 containerd[1674]: 2025-04-30 03:30:18.829 [INFO][5742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 03:30:18.844155 containerd[1674]: 2025-04-30 03:30:18.829 [INFO][5742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:18.844155 containerd[1674]: 2025-04-30 03:30:18.837 [WARNING][5742] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" HandleID="k8s-pod-network.5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:18.844155 containerd[1674]: 2025-04-30 03:30:18.838 [INFO][5742] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" HandleID="k8s-pod-network.5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:18.844155 containerd[1674]: 2025-04-30 03:30:18.841 [INFO][5742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:18.844155 containerd[1674]: 2025-04-30 03:30:18.842 [INFO][5734] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Apr 30 03:30:18.844155 containerd[1674]: time="2025-04-30T03:30:18.843874320Z" level=info msg="TearDown network for sandbox \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\" successfully" Apr 30 03:30:18.844155 containerd[1674]: time="2025-04-30T03:30:18.843904020Z" level=info msg="StopPodSandbox for \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\" returns successfully" Apr 30 03:30:18.845232 containerd[1674]: time="2025-04-30T03:30:18.845204132Z" level=info msg="RemovePodSandbox for \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\"" Apr 30 03:30:18.845542 containerd[1674]: time="2025-04-30T03:30:18.845401634Z" level=info msg="Forcibly stopping sandbox \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\"" Apr 30 03:30:18.904092 containerd[1674]: 2025-04-30 03:30:18.875 [WARNING][5762] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9774df4c-daf4-44bc-bfa3-9191c38f8346", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"63cfebcf209c7a5b7572290cfd740f91343483e63ce2b3dd09048bd83b4533f8", Pod:"coredns-7db6d8ff4d-gbccc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali051c1897ab9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:18.904092 containerd[1674]: 2025-04-30 03:30:18.876 [INFO][5762] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Apr 30 03:30:18.904092 containerd[1674]: 2025-04-30 03:30:18.876 [INFO][5762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" iface="eth0" netns="" Apr 30 03:30:18.904092 containerd[1674]: 2025-04-30 03:30:18.876 [INFO][5762] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Apr 30 03:30:18.904092 containerd[1674]: 2025-04-30 03:30:18.876 [INFO][5762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Apr 30 03:30:18.904092 containerd[1674]: 2025-04-30 03:30:18.895 [INFO][5769] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" HandleID="k8s-pod-network.5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:18.904092 containerd[1674]: 2025-04-30 03:30:18.895 [INFO][5769] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:18.904092 containerd[1674]: 2025-04-30 03:30:18.896 [INFO][5769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:18.904092 containerd[1674]: 2025-04-30 03:30:18.900 [WARNING][5769] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" HandleID="k8s-pod-network.5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:18.904092 containerd[1674]: 2025-04-30 03:30:18.901 [INFO][5769] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" HandleID="k8s-pod-network.5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--gbccc-eth0" Apr 30 03:30:18.904092 containerd[1674]: 2025-04-30 03:30:18.902 [INFO][5769] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:18.904092 containerd[1674]: 2025-04-30 03:30:18.903 [INFO][5762] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4" Apr 30 03:30:18.904804 containerd[1674]: time="2025-04-30T03:30:18.904175399Z" level=info msg="TearDown network for sandbox \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\" successfully" Apr 30 03:30:18.911081 containerd[1674]: time="2025-04-30T03:30:18.911038265Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:30:18.911206 containerd[1674]: time="2025-04-30T03:30:18.911107365Z" level=info msg="RemovePodSandbox \"5f50375e6b675d663153ece789624903e59e05c89edcb8c0f7c1b484e2a481a4\" returns successfully" Apr 30 03:30:18.911787 containerd[1674]: time="2025-04-30T03:30:18.911754172Z" level=info msg="StopPodSandbox for \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\"" Apr 30 03:30:18.973393 containerd[1674]: 2025-04-30 03:30:18.944 [WARNING][5787] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a78b558b-dccc-4c43-976d-3e3ed712a212", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487", Pod:"coredns-7db6d8ff4d-msddw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0feb25723c7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:18.973393 containerd[1674]: 2025-04-30 03:30:18.944 [INFO][5787] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Apr 30 03:30:18.973393 containerd[1674]: 2025-04-30 03:30:18.944 [INFO][5787] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" iface="eth0" netns="" Apr 30 03:30:18.973393 containerd[1674]: 2025-04-30 03:30:18.945 [INFO][5787] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Apr 30 03:30:18.973393 containerd[1674]: 2025-04-30 03:30:18.945 [INFO][5787] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Apr 30 03:30:18.973393 containerd[1674]: 2025-04-30 03:30:18.965 [INFO][5794] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" HandleID="k8s-pod-network.486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:18.973393 containerd[1674]: 2025-04-30 03:30:18.965 [INFO][5794] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 03:30:18.973393 containerd[1674]: 2025-04-30 03:30:18.965 [INFO][5794] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:18.973393 containerd[1674]: 2025-04-30 03:30:18.970 [WARNING][5794] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" HandleID="k8s-pod-network.486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:18.973393 containerd[1674]: 2025-04-30 03:30:18.970 [INFO][5794] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" HandleID="k8s-pod-network.486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:18.973393 containerd[1674]: 2025-04-30 03:30:18.971 [INFO][5794] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:18.973393 containerd[1674]: 2025-04-30 03:30:18.972 [INFO][5787] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Apr 30 03:30:18.974397 containerd[1674]: time="2025-04-30T03:30:18.973397464Z" level=info msg="TearDown network for sandbox \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\" successfully" Apr 30 03:30:18.974397 containerd[1674]: time="2025-04-30T03:30:18.973428864Z" level=info msg="StopPodSandbox for \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\" returns successfully" Apr 30 03:30:18.974397 containerd[1674]: time="2025-04-30T03:30:18.974062970Z" level=info msg="RemovePodSandbox for \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\"" Apr 30 03:30:18.974397 containerd[1674]: time="2025-04-30T03:30:18.974097270Z" level=info msg="Forcibly stopping sandbox \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\"" Apr 30 03:30:19.037359 containerd[1674]: 2025-04-30 03:30:19.008 [WARNING][5812] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a78b558b-dccc-4c43-976d-3e3ed712a212", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"bf2b9f2b9fca38babd847633fbc851a9458e88d4823de46010f36faa8564c487", Pod:"coredns-7db6d8ff4d-msddw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0feb25723c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:19.037359 containerd[1674]: 2025-04-30 03:30:19.008 [INFO][5812] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Apr 30 03:30:19.037359 containerd[1674]: 2025-04-30 03:30:19.008 [INFO][5812] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" iface="eth0" netns="" Apr 30 03:30:19.037359 containerd[1674]: 2025-04-30 03:30:19.008 [INFO][5812] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Apr 30 03:30:19.037359 containerd[1674]: 2025-04-30 03:30:19.008 [INFO][5812] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Apr 30 03:30:19.037359 containerd[1674]: 2025-04-30 03:30:19.026 [INFO][5819] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" HandleID="k8s-pod-network.486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:19.037359 containerd[1674]: 2025-04-30 03:30:19.026 [INFO][5819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:19.037359 containerd[1674]: 2025-04-30 03:30:19.026 [INFO][5819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:19.037359 containerd[1674]: 2025-04-30 03:30:19.033 [WARNING][5819] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" HandleID="k8s-pod-network.486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:19.037359 containerd[1674]: 2025-04-30 03:30:19.033 [INFO][5819] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" HandleID="k8s-pod-network.486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Workload="ci--4081.3.3--a--e2728433b6-k8s-coredns--7db6d8ff4d--msddw-eth0" Apr 30 03:30:19.037359 containerd[1674]: 2025-04-30 03:30:19.035 [INFO][5819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:19.037359 containerd[1674]: 2025-04-30 03:30:19.036 [INFO][5812] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8" Apr 30 03:30:19.037359 containerd[1674]: time="2025-04-30T03:30:19.037237377Z" level=info msg="TearDown network for sandbox \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\" successfully" Apr 30 03:30:19.046873 containerd[1674]: time="2025-04-30T03:30:19.046822269Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:30:19.047075 containerd[1674]: time="2025-04-30T03:30:19.046897569Z" level=info msg="RemovePodSandbox \"486e1e12a9d92fca98e2c381e2a8c72dfd00f44bedad8dd982b57d1bc0d563a8\" returns successfully" Apr 30 03:30:19.047404 containerd[1674]: time="2025-04-30T03:30:19.047372074Z" level=info msg="StopPodSandbox for \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\"" Apr 30 03:30:19.111273 containerd[1674]: 2025-04-30 03:30:19.081 [WARNING][5837] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0", GenerateName:"calico-apiserver-5bc9f7b477-", Namespace:"calico-apiserver", SelfLink:"", UID:"a556b718-e37a-4703-8148-82b2fa7a6e46", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc9f7b477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a", Pod:"calico-apiserver-5bc9f7b477-ksgxv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1caa0b4316b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:19.111273 containerd[1674]: 2025-04-30 03:30:19.081 [INFO][5837] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Apr 30 03:30:19.111273 containerd[1674]: 2025-04-30 03:30:19.081 [INFO][5837] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" iface="eth0" netns="" Apr 30 03:30:19.111273 containerd[1674]: 2025-04-30 03:30:19.081 [INFO][5837] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Apr 30 03:30:19.111273 containerd[1674]: 2025-04-30 03:30:19.081 [INFO][5837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Apr 30 03:30:19.111273 containerd[1674]: 2025-04-30 03:30:19.101 [INFO][5844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" HandleID="k8s-pod-network.491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:19.111273 containerd[1674]: 2025-04-30 03:30:19.101 [INFO][5844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:19.111273 containerd[1674]: 2025-04-30 03:30:19.101 [INFO][5844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:19.111273 containerd[1674]: 2025-04-30 03:30:19.107 [WARNING][5844] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" HandleID="k8s-pod-network.491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:19.111273 containerd[1674]: 2025-04-30 03:30:19.107 [INFO][5844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" HandleID="k8s-pod-network.491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:19.111273 containerd[1674]: 2025-04-30 03:30:19.109 [INFO][5844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:19.111273 containerd[1674]: 2025-04-30 03:30:19.110 [INFO][5837] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Apr 30 03:30:19.111273 containerd[1674]: time="2025-04-30T03:30:19.111063486Z" level=info msg="TearDown network for sandbox \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\" successfully" Apr 30 03:30:19.111273 containerd[1674]: time="2025-04-30T03:30:19.111097286Z" level=info msg="StopPodSandbox for \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\" returns successfully" Apr 30 03:30:19.113505 containerd[1674]: time="2025-04-30T03:30:19.111780193Z" level=info msg="RemovePodSandbox for \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\"" Apr 30 03:30:19.113505 containerd[1674]: time="2025-04-30T03:30:19.111839593Z" level=info msg="Forcibly stopping sandbox \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\"" Apr 30 03:30:19.175742 containerd[1674]: 2025-04-30 03:30:19.146 [WARNING][5862] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0", GenerateName:"calico-apiserver-5bc9f7b477-", Namespace:"calico-apiserver", SelfLink:"", UID:"a556b718-e37a-4703-8148-82b2fa7a6e46", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc9f7b477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"6e459c9190be798e1e1d35a8129cf37478711754f9c69cc8a11ab598af24d59a", Pod:"calico-apiserver-5bc9f7b477-ksgxv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1caa0b4316b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:19.175742 containerd[1674]: 2025-04-30 03:30:19.146 [INFO][5862] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Apr 30 03:30:19.175742 containerd[1674]: 2025-04-30 03:30:19.146 [INFO][5862] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" iface="eth0" netns="" Apr 30 03:30:19.175742 containerd[1674]: 2025-04-30 03:30:19.146 [INFO][5862] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Apr 30 03:30:19.175742 containerd[1674]: 2025-04-30 03:30:19.146 [INFO][5862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Apr 30 03:30:19.175742 containerd[1674]: 2025-04-30 03:30:19.165 [INFO][5869] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" HandleID="k8s-pod-network.491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:19.175742 containerd[1674]: 2025-04-30 03:30:19.165 [INFO][5869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:19.175742 containerd[1674]: 2025-04-30 03:30:19.165 [INFO][5869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:19.175742 containerd[1674]: 2025-04-30 03:30:19.171 [WARNING][5869] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" HandleID="k8s-pod-network.491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:19.175742 containerd[1674]: 2025-04-30 03:30:19.171 [INFO][5869] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" HandleID="k8s-pod-network.491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--apiserver--5bc9f7b477--ksgxv-eth0" Apr 30 03:30:19.175742 containerd[1674]: 2025-04-30 03:30:19.173 [INFO][5869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:19.175742 containerd[1674]: 2025-04-30 03:30:19.174 [INFO][5862] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce" Apr 30 03:30:19.176558 containerd[1674]: time="2025-04-30T03:30:19.175791107Z" level=info msg="TearDown network for sandbox \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\" successfully" Apr 30 03:30:19.184666 containerd[1674]: time="2025-04-30T03:30:19.184610392Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:30:19.184828 containerd[1674]: time="2025-04-30T03:30:19.184688193Z" level=info msg="RemovePodSandbox \"491975400b7604fe5efd845acdab7289ece20c54e979bd6092ffc65cc170f6ce\" returns successfully" Apr 30 03:30:19.185374 containerd[1674]: time="2025-04-30T03:30:19.185337299Z" level=info msg="StopPodSandbox for \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\"" Apr 30 03:30:19.246420 containerd[1674]: 2025-04-30 03:30:19.217 [WARNING][5887] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dcdc5f6d-cefa-4e15-8498-441a243c70ee", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942", Pod:"csi-node-driver-kz4tb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calief4a0becc64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:19.246420 containerd[1674]: 2025-04-30 03:30:19.217 [INFO][5887] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Apr 30 03:30:19.246420 containerd[1674]: 2025-04-30 03:30:19.217 [INFO][5887] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" iface="eth0" netns="" Apr 30 03:30:19.246420 containerd[1674]: 2025-04-30 03:30:19.217 [INFO][5887] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Apr 30 03:30:19.246420 containerd[1674]: 2025-04-30 03:30:19.217 [INFO][5887] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Apr 30 03:30:19.246420 containerd[1674]: 2025-04-30 03:30:19.237 [INFO][5895] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" HandleID="k8s-pod-network.74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Workload="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:19.246420 containerd[1674]: 2025-04-30 03:30:19.237 [INFO][5895] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:19.246420 containerd[1674]: 2025-04-30 03:30:19.237 [INFO][5895] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:19.246420 containerd[1674]: 2025-04-30 03:30:19.243 [WARNING][5895] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" HandleID="k8s-pod-network.74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Workload="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:19.246420 containerd[1674]: 2025-04-30 03:30:19.243 [INFO][5895] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" HandleID="k8s-pod-network.74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Workload="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:19.246420 containerd[1674]: 2025-04-30 03:30:19.244 [INFO][5895] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:19.246420 containerd[1674]: 2025-04-30 03:30:19.245 [INFO][5887] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Apr 30 03:30:19.247111 containerd[1674]: time="2025-04-30T03:30:19.246472286Z" level=info msg="TearDown network for sandbox \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\" successfully" Apr 30 03:30:19.247111 containerd[1674]: time="2025-04-30T03:30:19.246548387Z" level=info msg="StopPodSandbox for \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\" returns successfully" Apr 30 03:30:19.247265 containerd[1674]: time="2025-04-30T03:30:19.247230794Z" level=info msg="RemovePodSandbox for \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\"" Apr 30 03:30:19.247341 containerd[1674]: time="2025-04-30T03:30:19.247275694Z" level=info msg="Forcibly stopping sandbox \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\"" Apr 30 03:30:19.308814 containerd[1674]: 2025-04-30 03:30:19.280 [WARNING][5913] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dcdc5f6d-cefa-4e15-8498-441a243c70ee", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"48a6854503ef8652548d807c34c62f208ace29c565b30659ac42b6e12d90e942", Pod:"csi-node-driver-kz4tb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calief4a0becc64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:19.308814 containerd[1674]: 2025-04-30 03:30:19.280 [INFO][5913] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Apr 30 03:30:19.308814 containerd[1674]: 2025-04-30 03:30:19.280 [INFO][5913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" iface="eth0" netns="" Apr 30 03:30:19.308814 containerd[1674]: 2025-04-30 03:30:19.280 [INFO][5913] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Apr 30 03:30:19.308814 containerd[1674]: 2025-04-30 03:30:19.280 [INFO][5913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Apr 30 03:30:19.308814 containerd[1674]: 2025-04-30 03:30:19.299 [INFO][5920] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" HandleID="k8s-pod-network.74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Workload="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:19.308814 containerd[1674]: 2025-04-30 03:30:19.299 [INFO][5920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:19.308814 containerd[1674]: 2025-04-30 03:30:19.299 [INFO][5920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:19.308814 containerd[1674]: 2025-04-30 03:30:19.304 [WARNING][5920] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" HandleID="k8s-pod-network.74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Workload="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:19.308814 containerd[1674]: 2025-04-30 03:30:19.304 [INFO][5920] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" HandleID="k8s-pod-network.74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Workload="ci--4081.3.3--a--e2728433b6-k8s-csi--node--driver--kz4tb-eth0" Apr 30 03:30:19.308814 containerd[1674]: 2025-04-30 03:30:19.306 [INFO][5920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:19.308814 containerd[1674]: 2025-04-30 03:30:19.307 [INFO][5913] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742" Apr 30 03:30:19.309483 containerd[1674]: time="2025-04-30T03:30:19.308811185Z" level=info msg="TearDown network for sandbox \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\" successfully" Apr 30 03:30:19.317440 containerd[1674]: time="2025-04-30T03:30:19.317396067Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:30:19.317562 containerd[1674]: time="2025-04-30T03:30:19.317463968Z" level=info msg="RemovePodSandbox \"74a474b8b63ca35b7e5a41921430c6dfaee3f7075380433689c0489650dbb742\" returns successfully" Apr 30 03:30:19.318022 containerd[1674]: time="2025-04-30T03:30:19.317990573Z" level=info msg="StopPodSandbox for \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\"" Apr 30 03:30:19.383246 containerd[1674]: 2025-04-30 03:30:19.348 [WARNING][5938] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0", GenerateName:"calico-kube-controllers-f7478c6c9-", Namespace:"calico-system", SelfLink:"", UID:"a95e4c6a-c6b4-4ac1-a191-4c74878eb86f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7478c6c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f", Pod:"calico-kube-controllers-f7478c6c9-ncjxs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidaecf4f284b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:19.383246 containerd[1674]: 2025-04-30 03:30:19.348 [INFO][5938] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Apr 30 03:30:19.383246 containerd[1674]: 2025-04-30 03:30:19.348 [INFO][5938] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" iface="eth0" netns="" Apr 30 03:30:19.383246 containerd[1674]: 2025-04-30 03:30:19.348 [INFO][5938] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Apr 30 03:30:19.383246 containerd[1674]: 2025-04-30 03:30:19.348 [INFO][5938] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Apr 30 03:30:19.383246 containerd[1674]: 2025-04-30 03:30:19.371 [INFO][5945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" HandleID="k8s-pod-network.a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 03:30:19.383246 containerd[1674]: 2025-04-30 03:30:19.371 [INFO][5945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:19.383246 containerd[1674]: 2025-04-30 03:30:19.371 [INFO][5945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:19.383246 containerd[1674]: 2025-04-30 03:30:19.376 [WARNING][5945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" HandleID="k8s-pod-network.a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 03:30:19.383246 containerd[1674]: 2025-04-30 03:30:19.376 [INFO][5945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" HandleID="k8s-pod-network.a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 03:30:19.383246 containerd[1674]: 2025-04-30 03:30:19.377 [INFO][5945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:19.383246 containerd[1674]: 2025-04-30 03:30:19.379 [INFO][5938] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Apr 30 03:30:19.383246 containerd[1674]: time="2025-04-30T03:30:19.381642584Z" level=info msg="TearDown network for sandbox \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\" successfully" Apr 30 03:30:19.383246 containerd[1674]: time="2025-04-30T03:30:19.381685285Z" level=info msg="StopPodSandbox for \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\" returns successfully" Apr 30 03:30:19.385119 containerd[1674]: time="2025-04-30T03:30:19.383768205Z" level=info msg="RemovePodSandbox for \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\"" Apr 30 03:30:19.385119 containerd[1674]: time="2025-04-30T03:30:19.383813305Z" level=info msg="Forcibly stopping sandbox \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\"" Apr 30 03:30:19.444492 containerd[1674]: 2025-04-30 03:30:19.417 [WARNING][5963] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0", GenerateName:"calico-kube-controllers-f7478c6c9-", Namespace:"calico-system", SelfLink:"", UID:"a95e4c6a-c6b4-4ac1-a191-4c74878eb86f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7478c6c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-e2728433b6", ContainerID:"8a7dceb9f4629b625af1311d645352544ef8fcdff6cd58911de5a88f2a2f974f", Pod:"calico-kube-controllers-f7478c6c9-ncjxs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidaecf4f284b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:19.444492 containerd[1674]: 2025-04-30 03:30:19.417 [INFO][5963] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Apr 30 03:30:19.444492 containerd[1674]: 2025-04-30 03:30:19.417 [INFO][5963] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" iface="eth0" netns="" Apr 30 03:30:19.444492 containerd[1674]: 2025-04-30 03:30:19.417 [INFO][5963] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Apr 30 03:30:19.444492 containerd[1674]: 2025-04-30 03:30:19.417 [INFO][5963] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Apr 30 03:30:19.444492 containerd[1674]: 2025-04-30 03:30:19.435 [INFO][5970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" HandleID="k8s-pod-network.a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 03:30:19.444492 containerd[1674]: 2025-04-30 03:30:19.435 [INFO][5970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:19.444492 containerd[1674]: 2025-04-30 03:30:19.435 [INFO][5970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:19.444492 containerd[1674]: 2025-04-30 03:30:19.441 [WARNING][5970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" HandleID="k8s-pod-network.a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 03:30:19.444492 containerd[1674]: 2025-04-30 03:30:19.441 [INFO][5970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" HandleID="k8s-pod-network.a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Workload="ci--4081.3.3--a--e2728433b6-k8s-calico--kube--controllers--f7478c6c9--ncjxs-eth0" Apr 30 03:30:19.444492 containerd[1674]: 2025-04-30 03:30:19.442 [INFO][5970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:19.444492 containerd[1674]: 2025-04-30 03:30:19.443 [INFO][5963] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986" Apr 30 03:30:19.445349 containerd[1674]: time="2025-04-30T03:30:19.444524988Z" level=info msg="TearDown network for sandbox \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\" successfully" Apr 30 03:30:19.452638 containerd[1674]: time="2025-04-30T03:30:19.452569066Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:30:19.452749 containerd[1674]: time="2025-04-30T03:30:19.452670667Z" level=info msg="RemovePodSandbox \"a4dc42809c422f405d7fed48a2c9a267180bf8674f9071c0e0f462f504633986\" returns successfully" Apr 30 03:30:21.414725 kubelet[3215]: I0430 03:30:21.413955 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:58.587922 systemd[1]: Started sshd@7-10.200.8.4:22-10.200.16.10:40894.service - OpenSSH per-connection server daemon (10.200.16.10:40894). Apr 30 03:30:59.222309 sshd[6060]: Accepted publickey for core from 10.200.16.10 port 40894 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:30:59.223965 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:59.229826 systemd-logind[1656]: New session 10 of user core. Apr 30 03:30:59.235766 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:30:59.726244 sshd[6060]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:59.731327 systemd[1]: sshd@7-10.200.8.4:22-10.200.16.10:40894.service: Deactivated successfully. Apr 30 03:30:59.734357 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:30:59.735294 systemd-logind[1656]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:30:59.736998 systemd-logind[1656]: Removed session 10. Apr 30 03:31:04.845245 systemd[1]: Started sshd@8-10.200.8.4:22-10.200.16.10:36144.service - OpenSSH per-connection server daemon (10.200.16.10:36144). Apr 30 03:31:05.466280 sshd[6098]: Accepted publickey for core from 10.200.16.10 port 36144 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:05.466911 sshd[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:05.471983 systemd-logind[1656]: New session 11 of user core. Apr 30 03:31:05.475765 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 30 03:31:05.973627 sshd[6098]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:05.977622 systemd[1]: sshd@8-10.200.8.4:22-10.200.16.10:36144.service: Deactivated successfully. Apr 30 03:31:05.979772 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:31:05.981018 systemd-logind[1656]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:31:05.982717 systemd-logind[1656]: Removed session 11. Apr 30 03:31:11.089735 systemd[1]: Started sshd@9-10.200.8.4:22-10.200.16.10:48372.service - OpenSSH per-connection server daemon (10.200.16.10:48372). Apr 30 03:31:11.723957 sshd[6112]: Accepted publickey for core from 10.200.16.10 port 48372 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:11.726130 sshd[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:11.730693 systemd-logind[1656]: New session 12 of user core. Apr 30 03:31:11.734764 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 03:31:12.229458 sshd[6112]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:12.233388 systemd[1]: sshd@9-10.200.8.4:22-10.200.16.10:48372.service: Deactivated successfully. Apr 30 03:31:12.235953 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:31:12.236724 systemd-logind[1656]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:31:12.237837 systemd-logind[1656]: Removed session 12. Apr 30 03:31:12.342898 systemd[1]: Started sshd@10-10.200.8.4:22-10.200.16.10:48388.service - OpenSSH per-connection server daemon (10.200.16.10:48388). Apr 30 03:31:12.963875 sshd[6126]: Accepted publickey for core from 10.200.16.10 port 48388 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:12.965578 sshd[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:12.970429 systemd-logind[1656]: New session 13 of user core. 
Apr 30 03:31:12.977760 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 03:31:13.513048 sshd[6126]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:13.517099 systemd[1]: sshd@10-10.200.8.4:22-10.200.16.10:48388.service: Deactivated successfully. Apr 30 03:31:13.519313 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:31:13.520071 systemd-logind[1656]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:31:13.521123 systemd-logind[1656]: Removed session 13. Apr 30 03:31:13.625059 systemd[1]: Started sshd@11-10.200.8.4:22-10.200.16.10:48396.service - OpenSSH per-connection server daemon (10.200.16.10:48396). Apr 30 03:31:14.253632 sshd[6136]: Accepted publickey for core from 10.200.16.10 port 48396 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:14.255159 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:14.260078 systemd-logind[1656]: New session 14 of user core. Apr 30 03:31:14.275103 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 03:31:14.753509 sshd[6136]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:14.758195 systemd[1]: sshd@11-10.200.8.4:22-10.200.16.10:48396.service: Deactivated successfully. Apr 30 03:31:14.760506 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 03:31:14.761472 systemd-logind[1656]: Session 14 logged out. Waiting for processes to exit. Apr 30 03:31:14.762722 systemd-logind[1656]: Removed session 14. Apr 30 03:31:19.866819 systemd[1]: Started sshd@12-10.200.8.4:22-10.200.16.10:47710.service - OpenSSH per-connection server daemon (10.200.16.10:47710). 
Apr 30 03:31:20.497176 sshd[6173]: Accepted publickey for core from 10.200.16.10 port 47710 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:20.499026 sshd[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:20.504027 systemd-logind[1656]: New session 15 of user core. Apr 30 03:31:20.508762 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 03:31:20.996357 sshd[6173]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:20.999312 systemd[1]: sshd@12-10.200.8.4:22-10.200.16.10:47710.service: Deactivated successfully. Apr 30 03:31:21.001584 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 03:31:21.003259 systemd-logind[1656]: Session 15 logged out. Waiting for processes to exit. Apr 30 03:31:21.004796 systemd-logind[1656]: Removed session 15. Apr 30 03:31:26.112936 systemd[1]: Started sshd@13-10.200.8.4:22-10.200.16.10:47718.service - OpenSSH per-connection server daemon (10.200.16.10:47718). Apr 30 03:31:26.745359 sshd[6206]: Accepted publickey for core from 10.200.16.10 port 47718 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:26.746936 sshd[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:26.751549 systemd-logind[1656]: New session 16 of user core. Apr 30 03:31:26.756751 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 03:31:27.253803 sshd[6206]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:27.257006 systemd[1]: sshd@13-10.200.8.4:22-10.200.16.10:47718.service: Deactivated successfully. Apr 30 03:31:27.259355 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 03:31:27.260969 systemd-logind[1656]: Session 16 logged out. Waiting for processes to exit. Apr 30 03:31:27.262391 systemd-logind[1656]: Removed session 16. 
Apr 30 03:31:32.368913 systemd[1]: Started sshd@14-10.200.8.4:22-10.200.16.10:33452.service - OpenSSH per-connection server daemon (10.200.16.10:33452). Apr 30 03:31:33.001737 sshd[6229]: Accepted publickey for core from 10.200.16.10 port 33452 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:33.003402 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:33.008383 systemd-logind[1656]: New session 17 of user core. Apr 30 03:31:33.017090 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 03:31:33.534560 sshd[6229]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:33.539191 systemd[1]: sshd@14-10.200.8.4:22-10.200.16.10:33452.service: Deactivated successfully. Apr 30 03:31:33.541542 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 03:31:33.542353 systemd-logind[1656]: Session 17 logged out. Waiting for processes to exit. Apr 30 03:31:33.543678 systemd-logind[1656]: Removed session 17. Apr 30 03:31:33.644430 systemd[1]: Started sshd@15-10.200.8.4:22-10.200.16.10:33466.service - OpenSSH per-connection server daemon (10.200.16.10:33466). Apr 30 03:31:34.272636 sshd[6263]: Accepted publickey for core from 10.200.16.10 port 33466 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:34.273736 sshd[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:34.278438 systemd-logind[1656]: New session 18 of user core. Apr 30 03:31:34.285775 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 03:31:34.838240 sshd[6263]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:34.845691 systemd-logind[1656]: Session 18 logged out. Waiting for processes to exit. Apr 30 03:31:34.846902 systemd[1]: sshd@15-10.200.8.4:22-10.200.16.10:33466.service: Deactivated successfully. Apr 30 03:31:34.848835 systemd[1]: session-18.scope: Deactivated successfully. 
Apr 30 03:31:34.850056 systemd-logind[1656]: Removed session 18. Apr 30 03:31:34.953894 systemd[1]: Started sshd@16-10.200.8.4:22-10.200.16.10:33474.service - OpenSSH per-connection server daemon (10.200.16.10:33474). Apr 30 03:31:35.575823 sshd[6273]: Accepted publickey for core from 10.200.16.10 port 33474 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:35.577410 sshd[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:35.582349 systemd-logind[1656]: New session 19 of user core. Apr 30 03:31:35.586797 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 03:31:37.961068 sshd[6273]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:37.964387 systemd[1]: sshd@16-10.200.8.4:22-10.200.16.10:33474.service: Deactivated successfully. Apr 30 03:31:37.966812 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 03:31:37.968743 systemd-logind[1656]: Session 19 logged out. Waiting for processes to exit. Apr 30 03:31:37.969819 systemd-logind[1656]: Removed session 19. Apr 30 03:31:38.071831 systemd[1]: Started sshd@17-10.200.8.4:22-10.200.16.10:33482.service - OpenSSH per-connection server daemon (10.200.16.10:33482). Apr 30 03:31:38.709443 sshd[6292]: Accepted publickey for core from 10.200.16.10 port 33482 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:38.711099 sshd[6292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:38.715660 systemd-logind[1656]: New session 20 of user core. Apr 30 03:31:38.722960 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 03:31:39.393803 sshd[6292]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:39.396870 systemd[1]: sshd@17-10.200.8.4:22-10.200.16.10:33482.service: Deactivated successfully. Apr 30 03:31:39.399233 systemd[1]: session-20.scope: Deactivated successfully. 
Apr 30 03:31:39.401100 systemd-logind[1656]: Session 20 logged out. Waiting for processes to exit. Apr 30 03:31:39.402285 systemd-logind[1656]: Removed session 20. Apr 30 03:31:39.504823 systemd[1]: Started sshd@18-10.200.8.4:22-10.200.16.10:48562.service - OpenSSH per-connection server daemon (10.200.16.10:48562). Apr 30 03:31:40.133235 sshd[6303]: Accepted publickey for core from 10.200.16.10 port 48562 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:40.134951 sshd[6303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:40.139898 systemd-logind[1656]: New session 21 of user core. Apr 30 03:31:40.146751 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 03:31:40.632025 sshd[6303]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:40.636568 systemd[1]: sshd@18-10.200.8.4:22-10.200.16.10:48562.service: Deactivated successfully. Apr 30 03:31:40.638983 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 03:31:40.639774 systemd-logind[1656]: Session 21 logged out. Waiting for processes to exit. Apr 30 03:31:40.640770 systemd-logind[1656]: Removed session 21. Apr 30 03:31:45.747897 systemd[1]: Started sshd@19-10.200.8.4:22-10.200.16.10:48576.service - OpenSSH per-connection server daemon (10.200.16.10:48576). Apr 30 03:31:46.368617 sshd[6333]: Accepted publickey for core from 10.200.16.10 port 48576 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:46.370647 sshd[6333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:46.374661 systemd-logind[1656]: New session 22 of user core. Apr 30 03:31:46.380774 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 03:31:46.865133 sshd[6333]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:46.869131 systemd[1]: sshd@19-10.200.8.4:22-10.200.16.10:48576.service: Deactivated successfully. 
Apr 30 03:31:46.871228 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 03:31:46.872193 systemd-logind[1656]: Session 22 logged out. Waiting for processes to exit. Apr 30 03:31:46.873129 systemd-logind[1656]: Removed session 22. Apr 30 03:31:51.979420 systemd[1]: Started sshd@20-10.200.8.4:22-10.200.16.10:50236.service - OpenSSH per-connection server daemon (10.200.16.10:50236). Apr 30 03:31:52.606509 sshd[6368]: Accepted publickey for core from 10.200.16.10 port 50236 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:52.608102 sshd[6368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:52.613639 systemd-logind[1656]: New session 23 of user core. Apr 30 03:31:52.616782 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 03:31:53.111419 sshd[6368]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:53.114438 systemd[1]: sshd@20-10.200.8.4:22-10.200.16.10:50236.service: Deactivated successfully. Apr 30 03:31:53.116777 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 03:31:53.118306 systemd-logind[1656]: Session 23 logged out. Waiting for processes to exit. Apr 30 03:31:53.119469 systemd-logind[1656]: Removed session 23. Apr 30 03:31:58.224932 systemd[1]: Started sshd@21-10.200.8.4:22-10.200.16.10:50242.service - OpenSSH per-connection server daemon (10.200.16.10:50242). Apr 30 03:31:58.844845 sshd[6380]: Accepted publickey for core from 10.200.16.10 port 50242 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:58.846929 sshd[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:58.852457 systemd-logind[1656]: New session 24 of user core. Apr 30 03:31:58.855750 systemd[1]: Started session-24.scope - Session 24 of User core. 
Apr 30 03:31:59.340223 sshd[6380]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:59.344329 systemd[1]: sshd@21-10.200.8.4:22-10.200.16.10:50242.service: Deactivated successfully. Apr 30 03:31:59.346748 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 03:31:59.347521 systemd-logind[1656]: Session 24 logged out. Waiting for processes to exit. Apr 30 03:31:59.348935 systemd-logind[1656]: Removed session 24. Apr 30 03:32:04.456910 systemd[1]: Started sshd@22-10.200.8.4:22-10.200.16.10:37544.service - OpenSSH per-connection server daemon (10.200.16.10:37544). Apr 30 03:32:05.077092 sshd[6418]: Accepted publickey for core from 10.200.16.10 port 37544 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:32:05.078956 sshd[6418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:32:05.083996 systemd-logind[1656]: New session 25 of user core. Apr 30 03:32:05.090780 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 30 03:32:05.582777 sshd[6418]: pam_unix(sshd:session): session closed for user core Apr 30 03:32:05.586680 systemd[1]: sshd@22-10.200.8.4:22-10.200.16.10:37544.service: Deactivated successfully. Apr 30 03:32:05.588788 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 03:32:05.589557 systemd-logind[1656]: Session 25 logged out. Waiting for processes to exit. Apr 30 03:32:05.590689 systemd-logind[1656]: Removed session 25. Apr 30 03:32:10.694798 systemd[1]: Started sshd@23-10.200.8.4:22-10.200.16.10:33048.service - OpenSSH per-connection server daemon (10.200.16.10:33048). Apr 30 03:32:11.326983 sshd[6431]: Accepted publickey for core from 10.200.16.10 port 33048 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:32:11.328611 sshd[6431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:32:11.333113 systemd-logind[1656]: New session 26 of user core. 
Apr 30 03:32:11.336774 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 03:32:11.823021 sshd[6431]: pam_unix(sshd:session): session closed for user core Apr 30 03:32:11.825969 systemd[1]: sshd@23-10.200.8.4:22-10.200.16.10:33048.service: Deactivated successfully. Apr 30 03:32:11.828361 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 03:32:11.830050 systemd-logind[1656]: Session 26 logged out. Waiting for processes to exit. Apr 30 03:32:11.831516 systemd-logind[1656]: Removed session 26. Apr 30 03:32:16.947265 systemd[1]: Started sshd@24-10.200.8.4:22-10.200.16.10:33060.service - OpenSSH per-connection server daemon (10.200.16.10:33060). Apr 30 03:32:17.571639 sshd[6444]: Accepted publickey for core from 10.200.16.10 port 33060 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:32:17.573225 sshd[6444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:32:17.577923 systemd-logind[1656]: New session 27 of user core. Apr 30 03:32:17.581761 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 30 03:32:18.069488 sshd[6444]: pam_unix(sshd:session): session closed for user core Apr 30 03:32:18.074124 systemd[1]: sshd@24-10.200.8.4:22-10.200.16.10:33060.service: Deactivated successfully. Apr 30 03:32:18.076815 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 03:32:18.077956 systemd-logind[1656]: Session 27 logged out. Waiting for processes to exit. Apr 30 03:32:18.079156 systemd-logind[1656]: Removed session 27. Apr 30 03:32:21.924827 kubelet[3215]: E0430 03:32:21.924705 3215 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: EOF"