Apr 30 03:30:13.109221 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:30:13.109261 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:30:13.109277 kernel: BIOS-provided physical RAM map:
Apr 30 03:30:13.109288 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 03:30:13.109299 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Apr 30 03:30:13.109310 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Apr 30 03:30:13.109323 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Apr 30 03:30:13.109337 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Apr 30 03:30:13.109349 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Apr 30 03:30:13.109361 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Apr 30 03:30:13.109373 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Apr 30 03:30:13.109384 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Apr 30 03:30:13.109396 kernel: printk: bootconsole [earlyser0] enabled
Apr 30 03:30:13.109407 kernel: NX (Execute Disable) protection: active
Apr 30 03:30:13.109425 kernel: APIC: Static calls initialized
Apr 30 03:30:13.109438 kernel: efi: EFI v2.7 by Microsoft
Apr 30 03:30:13.109451 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98
Apr 30 03:30:13.109464 kernel: SMBIOS 3.1.0 present.
Apr 30 03:30:13.109477 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Apr 30 03:30:13.109489 kernel: Hypervisor detected: Microsoft Hyper-V
Apr 30 03:30:13.109502 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Apr 30 03:30:13.109515 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Apr 30 03:30:13.109528 kernel: Hyper-V: Nested features: 0x1e0101
Apr 30 03:30:13.109540 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Apr 30 03:30:13.109556 kernel: Hyper-V: Using hypercall for remote TLB flush
Apr 30 03:30:13.109569 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 30 03:30:13.109582 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 30 03:30:13.109596 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Apr 30 03:30:13.109609 kernel: tsc: Detected 2593.906 MHz processor
Apr 30 03:30:13.109622 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:30:13.109636 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:30:13.109649 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Apr 30 03:30:13.109662 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 30 03:30:13.109677 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:30:13.109690 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Apr 30 03:30:13.109703 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Apr 30 03:30:13.109715 kernel: Using GB pages for direct mapping
Apr 30 03:30:13.109729 kernel: Secure boot disabled
Apr 30 03:30:13.109743 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:30:13.109756 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Apr 30 03:30:13.109775 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109792 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109805 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Apr 30 03:30:13.109819 kernel: ACPI: FACS 0x000000003FFFE000 000040
Apr 30 03:30:13.109834 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109848 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109862 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109879 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109893 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109907 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109921 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109935 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Apr 30 03:30:13.109949 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Apr 30 03:30:13.109963 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Apr 30 03:30:13.109978 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Apr 30 03:30:13.109994 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Apr 30 03:30:13.110008 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Apr 30 03:30:13.110023 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Apr 30 03:30:13.110037 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Apr 30 03:30:13.110051 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Apr 30 03:30:13.110065 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Apr 30 03:30:13.110392 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 03:30:13.110407 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 03:30:13.110421 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Apr 30 03:30:13.110440 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Apr 30 03:30:13.110454 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Apr 30 03:30:13.110468 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Apr 30 03:30:13.110483 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Apr 30 03:30:13.110497 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Apr 30 03:30:13.110511 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Apr 30 03:30:13.110525 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Apr 30 03:30:13.110540 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Apr 30 03:30:13.110555 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Apr 30 03:30:13.110572 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Apr 30 03:30:13.110586 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Apr 30 03:30:13.110601 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Apr 30 03:30:13.110615 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Apr 30 03:30:13.110629 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Apr 30 03:30:13.110643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Apr 30 03:30:13.110657 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Apr 30 03:30:13.110672 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Apr 30 03:30:13.110685 kernel: Zone ranges:
Apr 30 03:30:13.110702 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:30:13.110716 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 30 03:30:13.110730 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Apr 30 03:30:13.110745 kernel: Movable zone start for each node
Apr 30 03:30:13.110759 kernel: Early memory node ranges
Apr 30 03:30:13.110773 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 30 03:30:13.110787 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Apr 30 03:30:13.110801 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Apr 30 03:30:13.110816 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Apr 30 03:30:13.110833 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Apr 30 03:30:13.110847 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:30:13.110861 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 30 03:30:13.110875 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Apr 30 03:30:13.110889 kernel: ACPI: PM-Timer IO Port: 0x408
Apr 30 03:30:13.110903 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Apr 30 03:30:13.110917 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:30:13.110932 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:30:13.110946 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:30:13.110963 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Apr 30 03:30:13.110978 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:30:13.110991 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Apr 30 03:30:13.111006 kernel: Booting paravirtualized kernel on Hyper-V
Apr 30 03:30:13.111020 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:30:13.111034 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:30:13.111048 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:30:13.111062 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:30:13.111092 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:30:13.111109 kernel: Hyper-V: PV spinlocks enabled
Apr 30 03:30:13.111123 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 03:30:13.111139 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:30:13.111154 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:30:13.111168 kernel: random: crng init done
Apr 30 03:30:13.111182 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 30 03:30:13.111196 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 03:30:13.111210 kernel: Fallback order for Node 0: 0
Apr 30 03:30:13.111227 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Apr 30 03:30:13.111252 kernel: Policy zone: Normal
Apr 30 03:30:13.111270 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:30:13.111285 kernel: software IO TLB: area num 2.
Apr 30 03:30:13.111300 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 310124K reserved, 0K cma-reserved)
Apr 30 03:30:13.111316 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:30:13.111331 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:30:13.111346 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:30:13.111361 kernel: Dynamic Preempt: voluntary
Apr 30 03:30:13.111377 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:30:13.111393 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:30:13.111411 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:30:13.111426 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:30:13.111442 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:30:13.111457 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:30:13.111473 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:30:13.111491 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:30:13.111505 kernel: Using NULL legacy PIC
Apr 30 03:30:13.111520 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Apr 30 03:30:13.111535 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:30:13.111550 kernel: Console: colour dummy device 80x25
Apr 30 03:30:13.111566 kernel: printk: console [tty1] enabled
Apr 30 03:30:13.111581 kernel: printk: console [ttyS0] enabled
Apr 30 03:30:13.111596 kernel: printk: bootconsole [earlyser0] disabled
Apr 30 03:30:13.111611 kernel: ACPI: Core revision 20230628
Apr 30 03:30:13.111627 kernel: Failed to register legacy timer interrupt
Apr 30 03:30:13.111645 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:30:13.111660 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 30 03:30:13.111675 kernel: Hyper-V: Using IPI hypercalls
Apr 30 03:30:13.111690 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Apr 30 03:30:13.111706 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Apr 30 03:30:13.111722 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Apr 30 03:30:13.111738 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Apr 30 03:30:13.111753 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Apr 30 03:30:13.111768 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Apr 30 03:30:13.111787 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Apr 30 03:30:13.111802 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 30 03:30:13.111817 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 30 03:30:13.111833 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:30:13.111848 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:30:13.111863 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:30:13.111877 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:30:13.111893 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 30 03:30:13.111908 kernel: RETBleed: Vulnerable
Apr 30 03:30:13.111925 kernel: Speculative Store Bypass: Vulnerable
Apr 30 03:30:13.111940 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:30:13.111955 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:30:13.111970 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:30:13.111985 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:30:13.112000 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:30:13.112015 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 30 03:30:13.112030 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 30 03:30:13.112046 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 30 03:30:13.112061 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:30:13.112084 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 30 03:30:13.118964 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 30 03:30:13.118988 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 30 03:30:13.119003 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 30 03:30:13.119018 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:30:13.119033 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:30:13.119048 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:30:13.119062 kernel: landlock: Up and running.
Apr 30 03:30:13.119087 kernel: SELinux: Initializing.
Apr 30 03:30:13.119102 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:30:13.119117 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:30:13.119132 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 30 03:30:13.119145 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:30:13.119167 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:30:13.119179 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:30:13.119194 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 30 03:30:13.119207 kernel: signal: max sigframe size: 3632
Apr 30 03:30:13.119218 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:30:13.119232 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:30:13.119245 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 03:30:13.119259 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:30:13.119273 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:30:13.119291 kernel: .... node #0, CPUs: #1
Apr 30 03:30:13.119306 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Apr 30 03:30:13.119324 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 30 03:30:13.119344 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:30:13.119358 kernel: smpboot: Max logical packages: 1
Apr 30 03:30:13.119370 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Apr 30 03:30:13.119382 kernel: devtmpfs: initialized
Apr 30 03:30:13.119396 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:30:13.119412 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Apr 30 03:30:13.119427 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:30:13.119442 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:30:13.119457 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:30:13.119472 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:30:13.119485 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:30:13.119498 kernel: audit: type=2000 audit(1745983811.028:1): state=initialized audit_enabled=0 res=1
Apr 30 03:30:13.119511 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:30:13.119523 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:30:13.119539 kernel: cpuidle: using governor menu
Apr 30 03:30:13.119554 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:30:13.119568 kernel: dca service started, version 1.12.1
Apr 30 03:30:13.119582 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Apr 30 03:30:13.119602 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:30:13.119614 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:30:13.119627 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:30:13.119640 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:30:13.119654 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:30:13.119672 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:30:13.119687 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:30:13.119702 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:30:13.119715 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:30:13.119734 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:30:13.119748 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:30:13.119762 kernel: ACPI: Interpreter enabled
Apr 30 03:30:13.119775 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:30:13.119788 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:30:13.119813 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:30:13.119829 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 30 03:30:13.119841 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 30 03:30:13.119854 kernel: iommu: Default domain type: Translated
Apr 30 03:30:13.119868 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:30:13.119882 kernel: efivars: Registered efivars operations
Apr 30 03:30:13.119897 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:30:13.119912 kernel: PCI: System does not support PCI
Apr 30 03:30:13.119926 kernel: vgaarb: loaded
Apr 30 03:30:13.119944 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 30 03:30:13.119959 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:30:13.119975 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:30:13.119990 kernel: pnp: PnP ACPI init
Apr 30 03:30:13.120006 kernel: pnp: PnP ACPI: found 3 devices
Apr 30 03:30:13.120021 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:30:13.120035 kernel: NET: Registered PF_INET protocol family
Apr 30 03:30:13.120050 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:30:13.120063 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 30 03:30:13.120093 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:30:13.120106 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 03:30:13.120122 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 30 03:30:13.120136 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 30 03:30:13.120151 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 30 03:30:13.120165 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 30 03:30:13.120179 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:30:13.120194 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:30:13.120207 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:30:13.120224 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 30 03:30:13.120239 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
Apr 30 03:30:13.120253 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 03:30:13.120267 kernel: Initialise system trusted keyrings
Apr 30 03:30:13.120281 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 30 03:30:13.120295 kernel: Key type asymmetric registered
Apr 30 03:30:13.120309 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:30:13.120322 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:30:13.120337 kernel: io scheduler mq-deadline registered
Apr 30 03:30:13.120354 kernel: io scheduler kyber registered
Apr 30 03:30:13.120368 kernel: io scheduler bfq registered
Apr 30 03:30:13.120382 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:30:13.120397 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:30:13.120410 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:30:13.120425 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 30 03:30:13.120440 kernel: i8042: PNP: No PS/2 controller found.
Apr 30 03:30:13.120629 kernel: rtc_cmos 00:02: registered as rtc0
Apr 30 03:30:13.120751 kernel: rtc_cmos 00:02: setting system clock to 2025-04-30T03:30:12 UTC (1745983812)
Apr 30 03:30:13.120870 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 30 03:30:13.120889 kernel: intel_pstate: CPU model not supported
Apr 30 03:30:13.120908 kernel: efifb: probing for efifb
Apr 30 03:30:13.120933 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 30 03:30:13.120948 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 30 03:30:13.120960 kernel: efifb: scrolling: redraw
Apr 30 03:30:13.120974 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 30 03:30:13.120996 kernel: Console: switching to colour frame buffer device 128x48
Apr 30 03:30:13.121009 kernel: fb0: EFI VGA frame buffer device
Apr 30 03:30:13.121022 kernel: pstore: Using crash dump compression: deflate
Apr 30 03:30:13.121034 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 30 03:30:13.121047 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:30:13.121059 kernel: Segment Routing with IPv6
Apr 30 03:30:13.121092 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:30:13.121104 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:30:13.121116 kernel: Key type dns_resolver registered
Apr 30 03:30:13.121134 kernel: IPI shorthand broadcast: enabled
Apr 30 03:30:13.121146 kernel: sched_clock: Marking stable (958006200, 47968700)->(1230181100, -224206200)
Apr 30 03:30:13.121159 kernel: registered taskstats version 1
Apr 30 03:30:13.121171 kernel: Loading compiled-in X.509 certificates
Apr 30 03:30:13.121186 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:30:13.121200 kernel: Key type .fscrypt registered
Apr 30 03:30:13.121213 kernel: Key type fscrypt-provisioning registered
Apr 30 03:30:13.121228 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:30:13.121241 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:30:13.121258 kernel: ima: No architecture policies found
Apr 30 03:30:13.121270 kernel: clk: Disabling unused clocks
Apr 30 03:30:13.121282 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:30:13.121294 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:30:13.121308 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:30:13.121322 kernel: Run /init as init process
Apr 30 03:30:13.121334 kernel: with arguments:
Apr 30 03:30:13.121346 kernel: /init
Apr 30 03:30:13.121360 kernel: with environment:
Apr 30 03:30:13.121376 kernel: HOME=/
Apr 30 03:30:13.121389 kernel: TERM=linux
Apr 30 03:30:13.121403 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:30:13.121421 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:30:13.121439 systemd[1]: Detected virtualization microsoft.
Apr 30 03:30:13.121453 systemd[1]: Detected architecture x86-64.
Apr 30 03:30:13.121467 systemd[1]: Running in initrd.
Apr 30 03:30:13.121480 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:30:13.121498 systemd[1]: Hostname set to .
Apr 30 03:30:13.121514 systemd[1]: Initializing machine ID from random generator.
Apr 30 03:30:13.121530 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:30:13.121546 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:30:13.121561 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:30:13.121579 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:30:13.121596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:30:13.121612 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:30:13.121632 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:30:13.121651 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:30:13.121666 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:30:13.121683 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:30:13.121699 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:30:13.121715 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:30:13.121730 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:30:13.121750 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:30:13.121765 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:30:13.121782 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:30:13.121798 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:30:13.121815 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:30:13.121832 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:30:13.121848 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:30:13.121865 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:30:13.121884 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:30:13.121900 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:30:13.121916 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:30:13.121932 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:30:13.121948 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:30:13.121964 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:30:13.121980 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:30:13.121996 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:30:13.122012 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:30:13.122112 systemd-journald[176]: Collecting audit messages is disabled.
Apr 30 03:30:13.122148 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:30:13.122162 systemd-journald[176]: Journal started
Apr 30 03:30:13.122201 systemd-journald[176]: Runtime Journal (/run/log/journal/3969e93f0e3d496a9c4809d8f4d3585e) is 8.0M, max 158.8M, 150.8M free.
Apr 30 03:30:13.095475 systemd-modules-load[177]: Inserted module 'overlay'
Apr 30 03:30:13.134090 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:30:13.138058 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:30:13.145620 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:30:13.146939 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:30:13.155204 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:30:13.162846 kernel: Bridge firewalling registered
Apr 30 03:30:13.156809 systemd-modules-load[177]: Inserted module 'br_netfilter'
Apr 30 03:30:13.160227 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:30:13.176252 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:30:13.184256 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:30:13.194267 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:30:13.201251 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:30:13.210226 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:30:13.228442 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:30:13.234137 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:30:13.247859 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:30:13.251221 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:30:13.267038 dracut-cmdline[202]: dracut-dracut-053
Apr 30 03:30:13.259159 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:30:13.272223 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:30:13.289084 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:30:13.304704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:30:13.338245 systemd-resolved[214]: Positive Trust Anchors:
Apr 30 03:30:13.338264 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:30:13.338314 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:30:13.366294 systemd-resolved[214]: Defaulting to hostname 'linux'.
Apr 30 03:30:13.367556 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:30:13.376484 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:30:13.386090 kernel: SCSI subsystem initialized
Apr 30 03:30:13.397087 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:30:13.408092 kernel: iscsi: registered transport (tcp)
Apr 30 03:30:13.430247 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:30:13.430359 kernel: QLogic iSCSI HBA Driver
Apr 30 03:30:13.466544 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:30:13.476243 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:30:13.503265 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:30:13.503355 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:30:13.506622 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:30:13.549113 kernel: raid6: avx512x4 gen() 17926 MB/s
Apr 30 03:30:13.571099 kernel: raid6: avx512x2 gen() 17849 MB/s
Apr 30 03:30:13.590081 kernel: raid6: avx512x1 gen() 17531 MB/s
Apr 30 03:30:13.610086 kernel: raid6: avx2x4 gen() 18147 MB/s
Apr 30 03:30:13.629083 kernel: raid6: avx2x2 gen() 17983 MB/s
Apr 30 03:30:13.649884 kernel: raid6: avx2x1 gen() 13580 MB/s
Apr 30 03:30:13.649943 kernel: raid6: using algorithm avx2x4 gen() 18147 MB/s
Apr 30 03:30:13.672767 kernel: raid6: .... xor() 6194 MB/s, rmw enabled
Apr 30 03:30:13.672814 kernel: raid6: using avx512x2 recovery algorithm
Apr 30 03:30:13.695098 kernel: xor: automatically using best checksumming function avx
Apr 30 03:30:13.844098 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:30:13.853920 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:30:13.863251 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:30:13.876945 systemd-udevd[394]: Using default interface naming scheme 'v255'.
Apr 30 03:30:13.881558 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:30:13.894320 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:30:13.911515 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Apr 30 03:30:13.938407 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:30:13.950354 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:30:13.992295 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:30:14.009313 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 03:30:14.040374 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:30:14.047218 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:30:14.054139 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:30:14.060485 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:30:14.071344 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:30:14.080359 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:30:14.098898 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:30:14.099087 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:30:14.118980 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:30:14.119007 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:30:14.104147 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:30:14.108294 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:30:14.108494 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:30:14.126208 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:30:14.143012 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:30:14.155117 kernel: hv_vmbus: Vmbus version:5.2
Apr 30 03:30:14.156627 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:30:14.173135 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 30 03:30:14.173208 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 30 03:30:14.182085 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 03:30:14.182627 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:30:14.182756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:30:14.208803 kernel: hv_vmbus: registering driver hid_hyperv
Apr 30 03:30:14.208838 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Apr 30 03:30:14.208858 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 30 03:30:14.208686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:30:14.232967 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:30:14.242088 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 30 03:30:14.250092 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Apr 30 03:30:14.254500 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:30:14.271002 kernel: hv_vmbus: registering driver hv_netvsc
Apr 30 03:30:14.271063 kernel: PTP clock support registered
Apr 30 03:30:14.281087 kernel: hv_vmbus: registering driver hv_storvsc
Apr 30 03:30:14.289088 kernel: scsi host1: storvsc_host_t
Apr 30 03:30:14.289308 kernel: scsi host0: storvsc_host_t
Apr 30 03:30:14.295105 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 30 03:30:14.299401 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:30:14.311357 kernel: hv_utils: Registering HyperV Utility Driver
Apr 30 03:30:14.311419 kernel: hv_vmbus: registering driver hv_utils
Apr 30 03:30:14.311433 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Apr 30 03:30:14.316225 kernel: hv_utils: Heartbeat IC version 3.0
Apr 30 03:30:14.318319 kernel: hv_utils: Shutdown IC version 3.2
Apr 30 03:30:14.318359 kernel: hv_utils: TimeSync IC version 4.0
Apr 30 03:30:14.427611 systemd-resolved[214]: Clock change detected. Flushing caches.
Apr 30 03:30:14.449070 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 30 03:30:14.452024 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 03:30:14.452050 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 30 03:30:14.460028 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 30 03:30:14.472868 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 30 03:30:14.473070 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 30 03:30:14.473247 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 30 03:30:14.473421 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 30 03:30:14.473629 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:30:14.473651 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 30 03:30:14.493498 kernel: hv_netvsc 6045bddd-9464-6045-bddd-94646045bddd eth0: VF slot 1 added
Apr 30 03:30:14.501486 kernel: hv_vmbus: registering driver hv_pci
Apr 30 03:30:14.508649 kernel: hv_pci c133be3a-556d-487c-b54a-15c2ac58750e: PCI VMBus probing: Using version 0x10004
Apr 30 03:30:14.549660 kernel: hv_pci c133be3a-556d-487c-b54a-15c2ac58750e: PCI host bridge to bus 556d:00
Apr 30 03:30:14.549875 kernel: pci_bus 556d:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Apr 30 03:30:14.550063 kernel: pci_bus 556d:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 30 03:30:14.550212 kernel: pci 556d:00:02.0: [15b3:1016] type 00 class 0x020000
Apr 30 03:30:14.550405 kernel: pci 556d:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 30 03:30:14.550615 kernel: pci 556d:00:02.0: enabling Extended Tags
Apr 30 03:30:14.550785 kernel: pci 556d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 556d:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Apr 30 03:30:14.550954 kernel: pci_bus 556d:00: busn_res: [bus 00-ff] end is updated to 00
Apr 30 03:30:14.551110 kernel: pci 556d:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 30 03:30:14.717153 kernel: mlx5_core 556d:00:02.0: enabling device (0000 -> 0002)
Apr 30 03:30:14.978789 kernel: mlx5_core 556d:00:02.0: firmware version: 14.30.5000
Apr 30 03:30:14.979005 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (451)
Apr 30 03:30:14.979027 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (446)
Apr 30 03:30:14.979046 kernel: hv_netvsc 6045bddd-9464-6045-bddd-94646045bddd eth0: VF registering: eth1
Apr 30 03:30:14.979200 kernel: mlx5_core 556d:00:02.0 eth1: joined to eth0
Apr 30 03:30:14.979376 kernel: mlx5_core 556d:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Apr 30 03:30:14.869714 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 30 03:30:14.923238 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 30 03:30:14.988906 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:30:14.942990 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 30 03:30:14.951683 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 30 03:30:14.955405 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 30 03:30:14.972754 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 03:30:15.005483 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:30:15.018507 kernel: mlx5_core 556d:00:02.0 enP21869s1: renamed from eth1
Apr 30 03:30:16.003494 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:30:16.004061 disk-uuid[600]: The operation has completed successfully.
Apr 30 03:30:16.085319 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 03:30:16.085432 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 03:30:16.106599 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 03:30:16.112757 sh[687]: Success
Apr 30 03:30:16.170771 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 30 03:30:16.325058 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 03:30:16.343661 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 03:30:16.349199 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 03:30:16.367373 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26
Apr 30 03:30:16.367446 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:30:16.371092 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 03:30:16.374152 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 03:30:16.376777 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 03:30:16.600158 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 03:30:16.603797 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 03:30:16.612702 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 03:30:16.619652 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 03:30:16.644244 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:30:16.644312 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:30:16.644333 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:30:16.660496 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:30:16.676215 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:30:16.675798 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 03:30:16.686622 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 03:30:16.699664 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 03:30:16.710325 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:30:16.717434 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:30:16.740271 systemd-networkd[871]: lo: Link UP
Apr 30 03:30:16.740281 systemd-networkd[871]: lo: Gained carrier
Apr 30 03:30:16.742424 systemd-networkd[871]: Enumeration completed
Apr 30 03:30:16.742612 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:30:16.744501 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:30:16.744505 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:30:16.747732 systemd[1]: Reached target network.target - Network.
Apr 30 03:30:16.804497 kernel: mlx5_core 556d:00:02.0 enP21869s1: Link up
Apr 30 03:30:16.837502 kernel: hv_netvsc 6045bddd-9464-6045-bddd-94646045bddd eth0: Data path switched to VF: enP21869s1
Apr 30 03:30:16.838629 systemd-networkd[871]: enP21869s1: Link UP
Apr 30 03:30:16.838769 systemd-networkd[871]: eth0: Link UP
Apr 30 03:30:16.839017 systemd-networkd[871]: eth0: Gained carrier
Apr 30 03:30:16.839029 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:30:16.844766 systemd-networkd[871]: enP21869s1: Gained carrier
Apr 30 03:30:16.871515 systemd-networkd[871]: eth0: DHCPv4 address 10.200.8.29/24, gateway 10.200.8.1 acquired from 168.63.129.16
Apr 30 03:30:17.532759 ignition[856]: Ignition 2.19.0
Apr 30 03:30:17.532771 ignition[856]: Stage: fetch-offline
Apr 30 03:30:17.532823 ignition[856]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:30:17.537154 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:30:17.532835 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:30:17.532957 ignition[856]: parsed url from cmdline: ""
Apr 30 03:30:17.532962 ignition[856]: no config URL provided
Apr 30 03:30:17.532969 ignition[856]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:30:17.532979 ignition[856]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:30:17.532988 ignition[856]: failed to fetch config: resource requires networking
Apr 30 03:30:17.556586 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 03:30:17.533226 ignition[856]: Ignition finished successfully
Apr 30 03:30:17.571110 ignition[879]: Ignition 2.19.0
Apr 30 03:30:17.571123 ignition[879]: Stage: fetch
Apr 30 03:30:17.571359 ignition[879]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:30:17.571373 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:30:17.571517 ignition[879]: parsed url from cmdline: ""
Apr 30 03:30:17.571522 ignition[879]: no config URL provided
Apr 30 03:30:17.571528 ignition[879]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:30:17.571535 ignition[879]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:30:17.571554 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 30 03:30:17.667611 ignition[879]: GET result: OK
Apr 30 03:30:17.667710 ignition[879]: config has been read from IMDS userdata
Apr 30 03:30:17.671925 unknown[879]: fetched base config from "system"
Apr 30 03:30:17.667739 ignition[879]: parsing config with SHA512: 1b80217a71701b1ce7a676b66b12fdd126192bc01e5fc7093b4cc7de62902be4a64da042154d653f23b481817f79142835b506592a2747c914af143e7934abb6
Apr 30 03:30:17.671931 unknown[879]: fetched base config from "system"
Apr 30 03:30:17.672486 ignition[879]: fetch: fetch complete
Apr 30 03:30:17.671936 unknown[879]: fetched user config from "azure"
Apr 30 03:30:17.672494 ignition[879]: fetch: fetch passed
Apr 30 03:30:17.674665 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:30:17.672548 ignition[879]: Ignition finished successfully
Apr 30 03:30:17.691552 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:30:17.707035 ignition[885]: Ignition 2.19.0
Apr 30 03:30:17.707047 ignition[885]: Stage: kargs
Apr 30 03:30:17.707306 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:30:17.707321 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:30:17.708231 ignition[885]: kargs: kargs passed
Apr 30 03:30:17.708274 ignition[885]: Ignition finished successfully
Apr 30 03:30:17.717210 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:30:17.730637 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:30:17.747195 ignition[891]: Ignition 2.19.0
Apr 30 03:30:17.747205 ignition[891]: Stage: disks
Apr 30 03:30:17.749174 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:30:17.747422 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:30:17.752940 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:30:17.747435 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:30:17.756873 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:30:17.748306 ignition[891]: disks: disks passed
Apr 30 03:30:17.760207 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:30:17.748351 ignition[891]: Ignition finished successfully
Apr 30 03:30:17.765807 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:30:17.770352 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:30:17.781676 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:30:17.841826 systemd-fsck[900]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 30 03:30:17.845888 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:30:17.857573 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:30:17.952542 kernel: EXT4-fs (sda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:30:17.953190 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:30:17.956395 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:30:17.990664 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:30:17.995901 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:30:18.004504 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 03:30:18.021042 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (911)
Apr 30 03:30:18.021076 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:30:18.011474 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:30:18.034404 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:30:18.034436 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:30:18.034471 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:30:18.011519 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:30:18.011670 systemd-networkd[871]: enP21869s1: Gained IPv6LL
Apr 30 03:30:18.017044 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:30:18.040016 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:30:18.048660 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:30:18.459613 coreos-metadata[913]: Apr 30 03:30:18.459 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 30 03:30:18.466104 coreos-metadata[913]: Apr 30 03:30:18.466 INFO Fetch successful
Apr 30 03:30:18.470395 coreos-metadata[913]: Apr 30 03:30:18.466 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 30 03:30:18.477132 coreos-metadata[913]: Apr 30 03:30:18.477 INFO Fetch successful
Apr 30 03:30:18.489490 coreos-metadata[913]: Apr 30 03:30:18.488 INFO wrote hostname ci-4081.3.3-a-6f0285bad0 to /sysroot/etc/hostname
Apr 30 03:30:18.494513 initrd-setup-root[940]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:30:18.497506 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:30:18.519586 systemd-networkd[871]: eth0: Gained IPv6LL
Apr 30 03:30:18.540831 initrd-setup-root[948]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:30:18.546238 initrd-setup-root[955]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:30:18.551315 initrd-setup-root[962]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:30:19.201349 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:30:19.214650 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:30:19.222939 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:30:19.233290 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:30:19.232998 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 03:30:19.257039 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 03:30:19.268410 ignition[1034]: INFO : Ignition 2.19.0
Apr 30 03:30:19.268410 ignition[1034]: INFO : Stage: mount
Apr 30 03:30:19.275401 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:30:19.275401 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:30:19.275401 ignition[1034]: INFO : mount: mount passed
Apr 30 03:30:19.275401 ignition[1034]: INFO : Ignition finished successfully
Apr 30 03:30:19.270515 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 03:30:19.287649 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 03:30:19.295925 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:30:19.311487 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1045)
Apr 30 03:30:19.315477 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:30:19.315517 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:30:19.320497 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:30:19.325485 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:30:19.327267 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:30:19.354781 ignition[1061]: INFO : Ignition 2.19.0
Apr 30 03:30:19.357433 ignition[1061]: INFO : Stage: files
Apr 30 03:30:19.357433 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:30:19.357433 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:30:19.366261 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 03:30:19.380895 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 03:30:19.384642 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 03:30:19.457367 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 03:30:19.462393 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 03:30:19.462393 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 03:30:19.457911 unknown[1061]: wrote ssh authorized keys file for user: core
Apr 30 03:30:19.473005 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 03:30:19.478158 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 03:30:19.482914 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:30:19.482914 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 03:30:19.533877 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 03:30:19.769996 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:30:19.769996 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Apr 30 03:30:20.172401 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 03:30:20.469642 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:30:20.469642 ignition[1061]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 30 03:30:20.517866 ignition[1061]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 03:30:20.525265 ignition[1061]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 03:30:20.525265 ignition[1061]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 30 03:30:20.525265 ignition[1061]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 30 03:30:20.538494 ignition[1061]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:30:20.538494 ignition[1061]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:30:20.538494 ignition[1061]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 30 03:30:20.538494 ignition[1061]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 03:30:20.538494 ignition[1061]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 03:30:20.556545 ignition[1061]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:30:20.560701 ignition[1061]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:30:20.564595 ignition[1061]: INFO : files: files passed
Apr 30 03:30:20.564595 ignition[1061]: INFO : Ignition finished successfully
Apr 30 03:30:20.562677 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:30:20.573642 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:30:20.586614 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:30:20.591698 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:30:20.591799 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:30:20.606648 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:30:20.606648 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:30:20.618801 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:30:20.612209 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:30:20.619783 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:30:20.643660 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:30:20.675912 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:30:20.676037 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:30:20.682476 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:30:20.688363 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:30:20.691160 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:30:20.700697 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:30:20.715141 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:30:20.724654 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:30:20.735098 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:30:20.736510 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:30:20.736921 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:30:20.737317 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:30:20.737429 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:30:20.738203 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:30:20.739115 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:30:20.739576 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:30:20.740024 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:30:20.740471 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:30:20.740924 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:30:20.741359 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:30:20.741888 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:30:20.742317 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:30:20.742851 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:30:20.743270 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:30:20.743399 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:30:20.744167 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:30:20.744792 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:30:20.745186 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:30:20.784752 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:30:20.791498 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:30:20.791673 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:30:20.802374 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:30:20.809895 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:30:20.829308 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:30:20.831317 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:30:20.836543 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 03:30:20.836679 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:30:20.872781 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:30:20.875173 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:30:20.875395 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:30:20.894435 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:30:20.899214 ignition[1115]: INFO : Ignition 2.19.0
Apr 30 03:30:20.907666 ignition[1115]: INFO : Stage: umount
Apr 30 03:30:20.907666 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:30:20.907666 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:30:20.907666 ignition[1115]: INFO : umount: umount passed
Apr 30 03:30:20.907666 ignition[1115]: INFO : Ignition finished successfully
Apr 30 03:30:20.903977 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:30:20.904181 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:30:20.904917 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:30:20.905045 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:30:20.927290 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:30:20.927445 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:30:20.935665 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:30:20.935796 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:30:20.942350 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:30:20.942446 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:30:20.947150 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:30:20.947202 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:30:20.950065 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 03:30:20.950109 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 03:30:20.955071 systemd[1]: Stopped target network.target - Network.
Apr 30 03:30:20.959585 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:30:20.959651 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:30:20.962759 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:30:20.963183 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:30:20.967558 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:30:20.973954 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:30:20.978907 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:30:20.983251 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:30:20.983292 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:30:20.991322 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:30:20.991374 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:30:20.996256 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:30:20.996322 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:30:21.001229 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:30:21.001281 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:30:21.004195 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:30:21.014719 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:30:21.017522 systemd-networkd[871]: eth0: DHCPv6 lease lost
Apr 30 03:30:21.023977 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:30:21.028886 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:30:21.029004 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:30:21.035275 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:30:21.035383 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:30:21.039288 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:30:21.039371 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:30:21.058881 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:30:21.062130 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:30:21.064569 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:30:21.068418 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:30:21.068494 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:30:21.073476 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:30:21.073628 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:30:21.079154 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:30:21.079211 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:30:21.082271 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:30:21.100987 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:30:21.101182 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:30:21.103848 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:30:21.104087 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:30:21.109962 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:30:21.110003 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:30:21.110322 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:30:21.110363 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:30:21.185808 kernel: hv_netvsc 6045bddd-9464-6045-bddd-94646045bddd eth0: Data path switched from VF: enP21869s1
Apr 30 03:30:21.111141 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:30:21.111179 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:30:21.126818 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:30:21.128995 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:30:21.137807 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:30:21.141559 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:30:21.141617 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:30:21.153313 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:30:21.153366 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:30:21.159126 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:30:21.159231 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:30:21.222179 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:30:21.222319 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:30:21.523298 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:30:21.523494 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:30:21.531741 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:30:21.534935 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:30:21.537729 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:30:21.552606 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:30:21.963359 systemd[1]: Switching root.
Apr 30 03:30:21.996923 systemd-journald[176]: Journal stopped
Apr 30 03:30:13.109221 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:30:13.109261 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:30:13.109277 kernel: BIOS-provided physical RAM map:
Apr 30 03:30:13.109288 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 03:30:13.109299 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Apr 30 03:30:13.109310 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Apr 30 03:30:13.109323 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Apr 30 03:30:13.109337 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Apr 30 03:30:13.109349 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Apr 30 03:30:13.109361 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Apr 30 03:30:13.109373 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Apr 30 03:30:13.109384 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Apr 30 03:30:13.109396 kernel: printk: bootconsole [earlyser0] enabled
Apr 30 03:30:13.109407 kernel: NX (Execute Disable) protection: active
Apr 30 03:30:13.109425 kernel: APIC: Static calls initialized
Apr 30 03:30:13.109438 kernel: efi: EFI v2.7 by Microsoft
Apr 30 03:30:13.109451 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98
Apr 30 03:30:13.109464 kernel: SMBIOS 3.1.0 present.
Apr 30 03:30:13.109477 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Apr 30 03:30:13.109489 kernel: Hypervisor detected: Microsoft Hyper-V
Apr 30 03:30:13.109502 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Apr 30 03:30:13.109515 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Apr 30 03:30:13.109528 kernel: Hyper-V: Nested features: 0x1e0101
Apr 30 03:30:13.109540 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Apr 30 03:30:13.109556 kernel: Hyper-V: Using hypercall for remote TLB flush
Apr 30 03:30:13.109569 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 30 03:30:13.109582 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 30 03:30:13.109596 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Apr 30 03:30:13.109609 kernel: tsc: Detected 2593.906 MHz processor
Apr 30 03:30:13.109622 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:30:13.109636 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:30:13.109649 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Apr 30 03:30:13.109662 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 30 03:30:13.109677 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:30:13.109690 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Apr 30 03:30:13.109703 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Apr 30 03:30:13.109715 kernel: Using GB pages for direct mapping
Apr 30 03:30:13.109729 kernel: Secure boot disabled
Apr 30 03:30:13.109743 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:30:13.109756 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Apr 30 03:30:13.109775 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109792 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109805 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Apr 30 03:30:13.109819 kernel: ACPI: FACS 0x000000003FFFE000 000040
Apr 30 03:30:13.109834 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109848 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109862 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109879 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109893 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109907 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109921 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:30:13.109935 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Apr 30 03:30:13.109949 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Apr 30 03:30:13.109963 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Apr 30 03:30:13.109978 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Apr 30 03:30:13.109994 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Apr 30 03:30:13.110008 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Apr 30 03:30:13.110023 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Apr 30 03:30:13.110037 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Apr 30 03:30:13.110051 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Apr 30 03:30:13.110065 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Apr 30 03:30:13.110392 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 03:30:13.110407 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 03:30:13.110421 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Apr 30 03:30:13.110440 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Apr 30 03:30:13.110454 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Apr 30 03:30:13.110468 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Apr 30 03:30:13.110483 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Apr 30 03:30:13.110497 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Apr 30 03:30:13.110511 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Apr 30 03:30:13.110525 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Apr 30 03:30:13.110540 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Apr 30 03:30:13.110555 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Apr 30 03:30:13.110572 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Apr 30 03:30:13.110586 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Apr 30 03:30:13.110601 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Apr 30 03:30:13.110615 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Apr 30 03:30:13.110629 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Apr 30 03:30:13.110643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Apr 30 03:30:13.110657 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Apr 30 03:30:13.110672 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Apr 30 03:30:13.110685 kernel: Zone ranges:
Apr 30 03:30:13.110702 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:30:13.110716 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 30 03:30:13.110730 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Apr 30 03:30:13.110745 kernel: Movable zone start for each node
Apr 30 03:30:13.110759 kernel: Early memory node ranges
Apr 30 03:30:13.110773 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 30 03:30:13.110787 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Apr 30 03:30:13.110801 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Apr 30 03:30:13.110816 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Apr 30 03:30:13.110833 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Apr 30 03:30:13.110847 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:30:13.110861 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 30 03:30:13.110875 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Apr 30 03:30:13.110889 kernel: ACPI: PM-Timer IO Port: 0x408
Apr 30 03:30:13.110903 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Apr 30 03:30:13.110917 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:30:13.110932 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:30:13.110946 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:30:13.110963 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Apr 30 03:30:13.110978 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:30:13.110991 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Apr 30 03:30:13.111006 kernel: Booting paravirtualized kernel on Hyper-V
Apr 30 03:30:13.111020 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:30:13.111034 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:30:13.111048 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:30:13.111062 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:30:13.111092 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:30:13.111109 kernel: Hyper-V: PV spinlocks enabled
Apr 30 03:30:13.111123 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 03:30:13.111139 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:30:13.111154 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:30:13.111168 kernel: random: crng init done
Apr 30 03:30:13.111182 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 30 03:30:13.111196 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 03:30:13.111210 kernel: Fallback order for Node 0: 0
Apr 30 03:30:13.111227 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Apr 30 03:30:13.111252 kernel: Policy zone: Normal
Apr 30 03:30:13.111270 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:30:13.111285 kernel: software IO TLB: area num 2.
Apr 30 03:30:13.111300 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 310124K reserved, 0K cma-reserved)
Apr 30 03:30:13.111316 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:30:13.111331 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:30:13.111346 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:30:13.111361 kernel: Dynamic Preempt: voluntary
Apr 30 03:30:13.111377 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:30:13.111393 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:30:13.111411 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:30:13.111426 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:30:13.111442 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:30:13.111457 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:30:13.111473 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:30:13.111491 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:30:13.111505 kernel: Using NULL legacy PIC
Apr 30 03:30:13.111520 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Apr 30 03:30:13.111535 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:30:13.111550 kernel: Console: colour dummy device 80x25
Apr 30 03:30:13.111566 kernel: printk: console [tty1] enabled
Apr 30 03:30:13.111581 kernel: printk: console [ttyS0] enabled
Apr 30 03:30:13.111596 kernel: printk: bootconsole [earlyser0] disabled
Apr 30 03:30:13.111611 kernel: ACPI: Core revision 20230628
Apr 30 03:30:13.111627 kernel: Failed to register legacy timer interrupt
Apr 30 03:30:13.111645 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:30:13.111660 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 30 03:30:13.111675 kernel: Hyper-V: Using IPI hypercalls
Apr 30 03:30:13.111690 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Apr 30 03:30:13.111706 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Apr 30 03:30:13.111722 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Apr 30 03:30:13.111738 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Apr 30 03:30:13.111753 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Apr 30 03:30:13.111768 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Apr 30 03:30:13.111787 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Apr 30 03:30:13.111802 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 30 03:30:13.111817 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 30 03:30:13.111833 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:30:13.111848 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:30:13.111863 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:30:13.111877 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:30:13.111893 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 30 03:30:13.111908 kernel: RETBleed: Vulnerable
Apr 30 03:30:13.111925 kernel: Speculative Store Bypass: Vulnerable
Apr 30 03:30:13.111940 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:30:13.111955 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:30:13.111970 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:30:13.111985 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:30:13.112000 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:30:13.112015 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 30 03:30:13.112030 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 30 03:30:13.112046 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 30 03:30:13.112061 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:30:13.112084 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 30 03:30:13.118964 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 30 03:30:13.118988 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 30 03:30:13.119003 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 30 03:30:13.119018 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:30:13.119033 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:30:13.119048 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:30:13.119062 kernel: landlock: Up and running.
Apr 30 03:30:13.119087 kernel: SELinux: Initializing.
Apr 30 03:30:13.119102 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:30:13.119117 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:30:13.119132 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 30 03:30:13.119145 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:30:13.119167 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:30:13.119179 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:30:13.119194 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 30 03:30:13.119207 kernel: signal: max sigframe size: 3632
Apr 30 03:30:13.119218 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:30:13.119232 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:30:13.119245 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 03:30:13.119259 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:30:13.119273 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:30:13.119291 kernel: .... node #0, CPUs: #1
Apr 30 03:30:13.119306 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Apr 30 03:30:13.119324 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 30 03:30:13.119344 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:30:13.119358 kernel: smpboot: Max logical packages: 1
Apr 30 03:30:13.119370 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Apr 30 03:30:13.119382 kernel: devtmpfs: initialized
Apr 30 03:30:13.119396 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:30:13.119412 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Apr 30 03:30:13.119427 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:30:13.119442 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:30:13.119457 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:30:13.119472 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:30:13.119485 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:30:13.119498 kernel: audit: type=2000 audit(1745983811.028:1): state=initialized audit_enabled=0 res=1
Apr 30 03:30:13.119511 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:30:13.119523 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:30:13.119539 kernel: cpuidle: using governor menu
Apr 30 03:30:13.119554 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:30:13.119568 kernel: dca service started, version 1.12.1
Apr 30 03:30:13.119582 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Apr 30 03:30:13.119602 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:30:13.119614 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:30:13.119627 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:30:13.119640 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:30:13.119654 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:30:13.119672 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:30:13.119687 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:30:13.119702 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:30:13.119715 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:30:13.119734 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:30:13.119748 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:30:13.119762 kernel: ACPI: Interpreter enabled
Apr 30 03:30:13.119775 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:30:13.119788 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:30:13.119813 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:30:13.119829 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 30 03:30:13.119841 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 30 03:30:13.119854 kernel: iommu: Default domain type: Translated
Apr 30 03:30:13.119868 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:30:13.119882 kernel: efivars: Registered efivars operations
Apr 30 03:30:13.119897 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:30:13.119912 kernel: PCI: System does not support PCI
Apr 30 03:30:13.119926 kernel: vgaarb: loaded
Apr 30 03:30:13.119944 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 30 03:30:13.119959 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:30:13.119975 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:30:13.119990 kernel: pnp: PnP ACPI init
Apr 30 03:30:13.120006 kernel: pnp: PnP ACPI: found 3 devices
Apr 30 03:30:13.120021 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:30:13.120035 kernel: NET: Registered PF_INET protocol family
Apr 30 03:30:13.120050 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:30:13.120063 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 30 03:30:13.120093 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:30:13.120106 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 03:30:13.120122 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 30 03:30:13.120136 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 30 03:30:13.120151 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 30 03:30:13.120165 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 30 03:30:13.120179 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:30:13.120194 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:30:13.120207 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:30:13.120224 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 30 03:30:13.120239 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
Apr 30 03:30:13.120253 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 03:30:13.120267 kernel: Initialise system trusted keyrings
Apr 30 03:30:13.120281 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 30 03:30:13.120295 kernel: Key type asymmetric registered
Apr 30 03:30:13.120309 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:30:13.120322 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:30:13.120337 kernel: io scheduler mq-deadline registered
Apr 30 03:30:13.120354 kernel: io scheduler kyber registered
Apr 30 03:30:13.120368 kernel: io scheduler bfq registered
Apr 30 03:30:13.120382 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:30:13.120397 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:30:13.120410 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:30:13.120425 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 30 03:30:13.120440 kernel: i8042: PNP: No PS/2 controller found.
Apr 30 03:30:13.120629 kernel: rtc_cmos 00:02: registered as rtc0
Apr 30 03:30:13.120751 kernel: rtc_cmos 00:02: setting system clock to 2025-04-30T03:30:12 UTC (1745983812)
Apr 30 03:30:13.120870 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 30 03:30:13.120889 kernel: intel_pstate: CPU model not supported
Apr 30 03:30:13.120908 kernel: efifb: probing for efifb
Apr 30 03:30:13.120933 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 30 03:30:13.120948 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 30 03:30:13.120960 kernel: efifb: scrolling: redraw
Apr 30 03:30:13.120974 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 30 03:30:13.120996 kernel: Console: switching to colour frame buffer device 128x48
Apr 30 03:30:13.121009 kernel: fb0: EFI VGA frame buffer device
Apr 30 03:30:13.121022 kernel: pstore: Using crash dump compression: deflate
Apr 30 03:30:13.121034 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 30 03:30:13.121047 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:30:13.121059 kernel: Segment Routing with IPv6
Apr 30 03:30:13.121092 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:30:13.121104 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:30:13.121116 kernel: Key type dns_resolver registered
Apr 30 03:30:13.121134 kernel: IPI shorthand broadcast: enabled
Apr 30 03:30:13.121146 kernel:
sched_clock: Marking stable (958006200, 47968700)->(1230181100, -224206200) Apr 30 03:30:13.121159 kernel: registered taskstats version 1 Apr 30 03:30:13.121171 kernel: Loading compiled-in X.509 certificates Apr 30 03:30:13.121186 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b' Apr 30 03:30:13.121200 kernel: Key type .fscrypt registered Apr 30 03:30:13.121213 kernel: Key type fscrypt-provisioning registered Apr 30 03:30:13.121228 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 03:30:13.121241 kernel: ima: Allocated hash algorithm: sha1 Apr 30 03:30:13.121258 kernel: ima: No architecture policies found Apr 30 03:30:13.121270 kernel: clk: Disabling unused clocks Apr 30 03:30:13.121282 kernel: Freeing unused kernel image (initmem) memory: 42864K Apr 30 03:30:13.121294 kernel: Write protecting the kernel read-only data: 36864k Apr 30 03:30:13.121308 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K Apr 30 03:30:13.121322 kernel: Run /init as init process Apr 30 03:30:13.121334 kernel: with arguments: Apr 30 03:30:13.121346 kernel: /init Apr 30 03:30:13.121360 kernel: with environment: Apr 30 03:30:13.121376 kernel: HOME=/ Apr 30 03:30:13.121389 kernel: TERM=linux Apr 30 03:30:13.121403 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 03:30:13.121421 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:30:13.121439 systemd[1]: Detected virtualization microsoft. Apr 30 03:30:13.121453 systemd[1]: Detected architecture x86-64. Apr 30 03:30:13.121467 systemd[1]: Running in initrd. Apr 30 03:30:13.121480 systemd[1]: No hostname configured, using default hostname. 
Apr 30 03:30:13.121498 systemd[1]: Hostname set to . Apr 30 03:30:13.121514 systemd[1]: Initializing machine ID from random generator. Apr 30 03:30:13.121530 systemd[1]: Queued start job for default target initrd.target. Apr 30 03:30:13.121546 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:30:13.121561 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:30:13.121579 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 03:30:13.121596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:30:13.121612 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 03:30:13.121632 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 03:30:13.121651 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 03:30:13.121666 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 03:30:13.121683 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:30:13.121699 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:30:13.121715 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:30:13.121730 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:30:13.121750 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:30:13.121765 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:30:13.121782 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:30:13.121798 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Apr 30 03:30:13.121815 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 03:30:13.121832 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 03:30:13.121848 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:30:13.121865 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:30:13.121884 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:30:13.121900 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:30:13.121916 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 03:30:13.121932 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:30:13.121948 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 03:30:13.121964 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 03:30:13.121980 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:30:13.121996 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:30:13.122012 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:30:13.122112 systemd-journald[176]: Collecting audit messages is disabled. Apr 30 03:30:13.122148 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 03:30:13.122162 systemd-journald[176]: Journal started Apr 30 03:30:13.122201 systemd-journald[176]: Runtime Journal (/run/log/journal/3969e93f0e3d496a9c4809d8f4d3585e) is 8.0M, max 158.8M, 150.8M free. Apr 30 03:30:13.095475 systemd-modules-load[177]: Inserted module 'overlay' Apr 30 03:30:13.134090 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:30:13.138058 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Apr 30 03:30:13.145620 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 03:30:13.146939 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 03:30:13.155204 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:30:13.162846 kernel: Bridge firewalling registered Apr 30 03:30:13.156809 systemd-modules-load[177]: Inserted module 'br_netfilter' Apr 30 03:30:13.160227 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:30:13.176252 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:30:13.184256 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:30:13.194267 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:30:13.201251 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:30:13.210226 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:30:13.228442 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 03:30:13.234137 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:30:13.247859 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:30:13.251221 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:30:13.267038 dracut-cmdline[202]: dracut-dracut-053 Apr 30 03:30:13.259159 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 30 03:30:13.272223 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:30:13.289084 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:30:13.304704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:30:13.338245 systemd-resolved[214]: Positive Trust Anchors: Apr 30 03:30:13.338264 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:30:13.338314 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:30:13.366294 systemd-resolved[214]: Defaulting to hostname 'linux'. Apr 30 03:30:13.367556 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:30:13.376484 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:30:13.386090 kernel: SCSI subsystem initialized Apr 30 03:30:13.397087 kernel: Loading iSCSI transport class v2.0-870. 
Apr 30 03:30:13.408092 kernel: iscsi: registered transport (tcp) Apr 30 03:30:13.430247 kernel: iscsi: registered transport (qla4xxx) Apr 30 03:30:13.430359 kernel: QLogic iSCSI HBA Driver Apr 30 03:30:13.466544 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 03:30:13.476243 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 03:30:13.503265 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 03:30:13.503355 kernel: device-mapper: uevent: version 1.0.3 Apr 30 03:30:13.506622 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 03:30:13.549113 kernel: raid6: avx512x4 gen() 17926 MB/s Apr 30 03:30:13.571099 kernel: raid6: avx512x2 gen() 17849 MB/s Apr 30 03:30:13.590081 kernel: raid6: avx512x1 gen() 17531 MB/s Apr 30 03:30:13.610086 kernel: raid6: avx2x4 gen() 18147 MB/s Apr 30 03:30:13.629083 kernel: raid6: avx2x2 gen() 17983 MB/s Apr 30 03:30:13.649884 kernel: raid6: avx2x1 gen() 13580 MB/s Apr 30 03:30:13.649943 kernel: raid6: using algorithm avx2x4 gen() 18147 MB/s Apr 30 03:30:13.672767 kernel: raid6: .... xor() 6194 MB/s, rmw enabled Apr 30 03:30:13.672814 kernel: raid6: using avx512x2 recovery algorithm Apr 30 03:30:13.695098 kernel: xor: automatically using best checksumming function avx Apr 30 03:30:13.844098 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 03:30:13.853920 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:30:13.863251 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:30:13.876945 systemd-udevd[394]: Using default interface naming scheme 'v255'. Apr 30 03:30:13.881558 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:30:13.894320 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 30 03:30:13.911515 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Apr 30 03:30:13.938407 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:30:13.950354 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:30:13.992295 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:30:14.009313 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 03:30:14.040374 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 03:30:14.047218 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:30:14.054139 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:30:14.060485 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:30:14.071344 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 03:30:14.080359 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 03:30:14.098898 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:30:14.099087 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:30:14.118980 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 03:30:14.119007 kernel: AES CTR mode by8 optimization enabled Apr 30 03:30:14.104147 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:30:14.108294 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:30:14.108494 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:30:14.126208 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:30:14.143012 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 30 03:30:14.155117 kernel: hv_vmbus: Vmbus version:5.2 Apr 30 03:30:14.156627 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:30:14.173135 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 30 03:30:14.173208 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 30 03:30:14.182085 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 03:30:14.182627 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:30:14.182756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:30:14.208803 kernel: hv_vmbus: registering driver hid_hyperv Apr 30 03:30:14.208838 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Apr 30 03:30:14.208858 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 30 03:30:14.208686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:30:14.232967 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:30:14.242088 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 30 03:30:14.250092 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Apr 30 03:30:14.254500 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 30 03:30:14.271002 kernel: hv_vmbus: registering driver hv_netvsc Apr 30 03:30:14.271063 kernel: PTP clock support registered Apr 30 03:30:14.281087 kernel: hv_vmbus: registering driver hv_storvsc Apr 30 03:30:14.289088 kernel: scsi host1: storvsc_host_t Apr 30 03:30:14.289308 kernel: scsi host0: storvsc_host_t Apr 30 03:30:14.295105 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 30 03:30:14.299401 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:30:14.311357 kernel: hv_utils: Registering HyperV Utility Driver Apr 30 03:30:14.311419 kernel: hv_vmbus: registering driver hv_utils Apr 30 03:30:14.311433 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Apr 30 03:30:14.316225 kernel: hv_utils: Heartbeat IC version 3.0 Apr 30 03:30:14.318319 kernel: hv_utils: Shutdown IC version 3.2 Apr 30 03:30:14.318359 kernel: hv_utils: TimeSync IC version 4.0 Apr 30 03:30:14.427611 systemd-resolved[214]: Clock change detected. Flushing caches. 
Apr 30 03:30:14.449070 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 30 03:30:14.452024 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 03:30:14.452050 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 30 03:30:14.460028 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 30 03:30:14.472868 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 30 03:30:14.473070 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 03:30:14.473247 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 30 03:30:14.473421 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 30 03:30:14.473629 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:30:14.473651 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 30 03:30:14.493498 kernel: hv_netvsc 6045bddd-9464-6045-bddd-94646045bddd eth0: VF slot 1 added Apr 30 03:30:14.501486 kernel: hv_vmbus: registering driver hv_pci Apr 30 03:30:14.508649 kernel: hv_pci c133be3a-556d-487c-b54a-15c2ac58750e: PCI VMBus probing: Using version 0x10004 Apr 30 03:30:14.549660 kernel: hv_pci c133be3a-556d-487c-b54a-15c2ac58750e: PCI host bridge to bus 556d:00 Apr 30 03:30:14.549875 kernel: pci_bus 556d:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Apr 30 03:30:14.550063 kernel: pci_bus 556d:00: No busn resource found for root bus, will use [bus 00-ff] Apr 30 03:30:14.550212 kernel: pci 556d:00:02.0: [15b3:1016] type 00 class 0x020000 Apr 30 03:30:14.550405 kernel: pci 556d:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 30 03:30:14.550615 kernel: pci 556d:00:02.0: enabling Extended Tags Apr 30 03:30:14.550785 kernel: pci 556d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 556d:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Apr 30 03:30:14.550954 kernel: pci_bus 556d:00: busn_res: [bus 00-ff] end is updated to 00 Apr 30 03:30:14.551110 kernel: pci 556d:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 
64bit pref] Apr 30 03:30:14.717153 kernel: mlx5_core 556d:00:02.0: enabling device (0000 -> 0002) Apr 30 03:30:14.978789 kernel: mlx5_core 556d:00:02.0: firmware version: 14.30.5000 Apr 30 03:30:14.979005 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (451) Apr 30 03:30:14.979027 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (446) Apr 30 03:30:14.979046 kernel: hv_netvsc 6045bddd-9464-6045-bddd-94646045bddd eth0: VF registering: eth1 Apr 30 03:30:14.979200 kernel: mlx5_core 556d:00:02.0 eth1: joined to eth0 Apr 30 03:30:14.979376 kernel: mlx5_core 556d:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 03:30:14.869714 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 30 03:30:14.923238 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 30 03:30:14.988906 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:30:14.942990 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Apr 30 03:30:14.951683 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 30 03:30:14.955405 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 30 03:30:14.972754 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 03:30:15.005483 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:30:15.018507 kernel: mlx5_core 556d:00:02.0 enP21869s1: renamed from eth1 Apr 30 03:30:16.003494 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:30:16.004061 disk-uuid[600]: The operation has completed successfully. Apr 30 03:30:16.085319 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:30:16.085432 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Apr 30 03:30:16.106599 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 03:30:16.112757 sh[687]: Success Apr 30 03:30:16.170771 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 03:30:16.325058 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 03:30:16.343661 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 03:30:16.349199 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 03:30:16.367373 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:30:16.367446 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:30:16.371092 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:30:16.374152 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:30:16.376777 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:30:16.600158 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:30:16.603797 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:30:16.612702 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:30:16.619652 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 30 03:30:16.644244 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:30:16.644312 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:30:16.644333 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:30:16.660496 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:30:16.676215 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:30:16.675798 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 03:30:16.686622 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:30:16.699664 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 03:30:16.710325 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:30:16.717434 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:30:16.740271 systemd-networkd[871]: lo: Link UP Apr 30 03:30:16.740281 systemd-networkd[871]: lo: Gained carrier Apr 30 03:30:16.742424 systemd-networkd[871]: Enumeration completed Apr 30 03:30:16.742612 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:30:16.744501 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:30:16.744505 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:30:16.747732 systemd[1]: Reached target network.target - Network. 
Apr 30 03:30:16.804497 kernel: mlx5_core 556d:00:02.0 enP21869s1: Link up Apr 30 03:30:16.837502 kernel: hv_netvsc 6045bddd-9464-6045-bddd-94646045bddd eth0: Data path switched to VF: enP21869s1 Apr 30 03:30:16.838629 systemd-networkd[871]: enP21869s1: Link UP Apr 30 03:30:16.838769 systemd-networkd[871]: eth0: Link UP Apr 30 03:30:16.839017 systemd-networkd[871]: eth0: Gained carrier Apr 30 03:30:16.839029 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:30:16.844766 systemd-networkd[871]: enP21869s1: Gained carrier Apr 30 03:30:16.871515 systemd-networkd[871]: eth0: DHCPv4 address 10.200.8.29/24, gateway 10.200.8.1 acquired from 168.63.129.16 Apr 30 03:30:17.532759 ignition[856]: Ignition 2.19.0 Apr 30 03:30:17.532771 ignition[856]: Stage: fetch-offline Apr 30 03:30:17.532823 ignition[856]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:30:17.537154 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:30:17.532835 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:30:17.532957 ignition[856]: parsed url from cmdline: "" Apr 30 03:30:17.532962 ignition[856]: no config URL provided Apr 30 03:30:17.532969 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:30:17.532979 ignition[856]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:30:17.532988 ignition[856]: failed to fetch config: resource requires networking Apr 30 03:30:17.556586 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 30 03:30:17.533226 ignition[856]: Ignition finished successfully Apr 30 03:30:17.571110 ignition[879]: Ignition 2.19.0 Apr 30 03:30:17.571123 ignition[879]: Stage: fetch Apr 30 03:30:17.571359 ignition[879]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:30:17.571373 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:30:17.571517 ignition[879]: parsed url from cmdline: "" Apr 30 03:30:17.571522 ignition[879]: no config URL provided Apr 30 03:30:17.571528 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:30:17.571535 ignition[879]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:30:17.571554 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 30 03:30:17.667611 ignition[879]: GET result: OK Apr 30 03:30:17.667710 ignition[879]: config has been read from IMDS userdata Apr 30 03:30:17.671925 unknown[879]: fetched base config from "system" Apr 30 03:30:17.667739 ignition[879]: parsing config with SHA512: 1b80217a71701b1ce7a676b66b12fdd126192bc01e5fc7093b4cc7de62902be4a64da042154d653f23b481817f79142835b506592a2747c914af143e7934abb6 Apr 30 03:30:17.671931 unknown[879]: fetched base config from "system" Apr 30 03:30:17.672486 ignition[879]: fetch: fetch complete Apr 30 03:30:17.671936 unknown[879]: fetched user config from "azure" Apr 30 03:30:17.672494 ignition[879]: fetch: fetch passed Apr 30 03:30:17.674665 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 03:30:17.672548 ignition[879]: Ignition finished successfully Apr 30 03:30:17.691552 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 30 03:30:17.707035 ignition[885]: Ignition 2.19.0 Apr 30 03:30:17.707047 ignition[885]: Stage: kargs Apr 30 03:30:17.707306 ignition[885]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:30:17.707321 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:30:17.708231 ignition[885]: kargs: kargs passed Apr 30 03:30:17.708274 ignition[885]: Ignition finished successfully Apr 30 03:30:17.717210 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 03:30:17.730637 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 03:30:17.747195 ignition[891]: Ignition 2.19.0 Apr 30 03:30:17.747205 ignition[891]: Stage: disks Apr 30 03:30:17.749174 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 03:30:17.747422 ignition[891]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:30:17.752940 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 03:30:17.747435 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:30:17.756873 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:30:17.748306 ignition[891]: disks: disks passed Apr 30 03:30:17.760207 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:30:17.748351 ignition[891]: Ignition finished successfully Apr 30 03:30:17.765807 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:30:17.770352 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:30:17.781676 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 03:30:17.841826 systemd-fsck[900]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 30 03:30:17.845888 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 03:30:17.857573 systemd[1]: Mounting sysroot.mount - /sysroot... 
Apr 30 03:30:17.952542 kernel: EXT4-fs (sda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:30:17.953190 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:30:17.956395 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:30:17.990664 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:30:17.995901 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:30:18.004504 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 03:30:18.021042 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (911)
Apr 30 03:30:18.021076 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:30:18.011474 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:30:18.034404 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:30:18.034436 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:30:18.034471 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:30:18.011519 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:30:18.011670 systemd-networkd[871]: enP21869s1: Gained IPv6LL
Apr 30 03:30:18.017044 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:30:18.040016 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:30:18.048660 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:30:18.459613 coreos-metadata[913]: Apr 30 03:30:18.459 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 30 03:30:18.466104 coreos-metadata[913]: Apr 30 03:30:18.466 INFO Fetch successful
Apr 30 03:30:18.470395 coreos-metadata[913]: Apr 30 03:30:18.466 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 30 03:30:18.477132 coreos-metadata[913]: Apr 30 03:30:18.477 INFO Fetch successful
Apr 30 03:30:18.489490 coreos-metadata[913]: Apr 30 03:30:18.488 INFO wrote hostname ci-4081.3.3-a-6f0285bad0 to /sysroot/etc/hostname
Apr 30 03:30:18.494513 initrd-setup-root[940]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:30:18.497506 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:30:18.519586 systemd-networkd[871]: eth0: Gained IPv6LL
Apr 30 03:30:18.540831 initrd-setup-root[948]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:30:18.546238 initrd-setup-root[955]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:30:18.551315 initrd-setup-root[962]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:30:19.201349 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:30:19.214650 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:30:19.222939 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:30:19.233290 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:30:19.232998 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 03:30:19.257039 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 03:30:19.268410 ignition[1034]: INFO : Ignition 2.19.0
Apr 30 03:30:19.268410 ignition[1034]: INFO : Stage: mount
Apr 30 03:30:19.275401 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:30:19.275401 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:30:19.275401 ignition[1034]: INFO : mount: mount passed
Apr 30 03:30:19.275401 ignition[1034]: INFO : Ignition finished successfully
Apr 30 03:30:19.270515 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 03:30:19.287649 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 03:30:19.295925 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:30:19.311487 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1045)
Apr 30 03:30:19.315477 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:30:19.315517 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:30:19.320497 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:30:19.325485 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:30:19.327267 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:30:19.354781 ignition[1061]: INFO : Ignition 2.19.0
Apr 30 03:30:19.357433 ignition[1061]: INFO : Stage: files
Apr 30 03:30:19.357433 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:30:19.357433 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:30:19.366261 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 03:30:19.380895 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 03:30:19.384642 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 03:30:19.457367 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 03:30:19.462393 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 03:30:19.462393 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 03:30:19.457911 unknown[1061]: wrote ssh authorized keys file for user: core
Apr 30 03:30:19.473005 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 03:30:19.478158 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 03:30:19.482914 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:30:19.482914 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 03:30:19.533877 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 03:30:19.769996 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:30:19.769996 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:30:19.780150 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Apr 30 03:30:20.172401 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 03:30:20.469642 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:30:20.469642 ignition[1061]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 30 03:30:20.517866 ignition[1061]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 03:30:20.525265 ignition[1061]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 03:30:20.525265 ignition[1061]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 30 03:30:20.525265 ignition[1061]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 30 03:30:20.538494 ignition[1061]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:30:20.538494 ignition[1061]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:30:20.538494 ignition[1061]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 30 03:30:20.538494 ignition[1061]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 03:30:20.538494 ignition[1061]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 03:30:20.556545 ignition[1061]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:30:20.560701 ignition[1061]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:30:20.564595 ignition[1061]: INFO : files: files passed
Apr 30 03:30:20.564595 ignition[1061]: INFO : Ignition finished successfully
Apr 30 03:30:20.562677 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:30:20.573642 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:30:20.586614 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:30:20.591698 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:30:20.591799 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:30:20.606648 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:30:20.606648 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:30:20.618801 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:30:20.612209 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:30:20.619783 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:30:20.643660 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:30:20.675912 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:30:20.676037 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:30:20.682476 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:30:20.688363 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:30:20.691160 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:30:20.700697 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:30:20.715141 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:30:20.724654 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:30:20.735098 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:30:20.736510 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:30:20.736921 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:30:20.737317 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:30:20.737429 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:30:20.738203 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:30:20.739115 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:30:20.739576 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:30:20.740024 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:30:20.740471 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:30:20.740924 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:30:20.741359 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:30:20.741888 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:30:20.742317 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:30:20.742851 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:30:20.743270 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:30:20.743399 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:30:20.744167 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:30:20.744792 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:30:20.745186 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:30:20.784752 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:30:20.791498 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:30:20.791673 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:30:20.802374 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:30:20.809895 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:30:20.829308 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:30:20.831317 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:30:20.836543 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 03:30:20.836679 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:30:20.872781 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:30:20.875173 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:30:20.875395 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:30:20.894435 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:30:20.899214 ignition[1115]: INFO : Ignition 2.19.0
Apr 30 03:30:20.907666 ignition[1115]: INFO : Stage: umount
Apr 30 03:30:20.907666 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:30:20.907666 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:30:20.907666 ignition[1115]: INFO : umount: umount passed
Apr 30 03:30:20.907666 ignition[1115]: INFO : Ignition finished successfully
Apr 30 03:30:20.903977 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:30:20.904181 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:30:20.904917 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:30:20.905045 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:30:20.927290 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:30:20.927445 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:30:20.935665 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:30:20.935796 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:30:20.942350 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:30:20.942446 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:30:20.947150 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:30:20.947202 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:30:20.950065 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 03:30:20.950109 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 03:30:20.955071 systemd[1]: Stopped target network.target - Network.
Apr 30 03:30:20.959585 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:30:20.959651 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:30:20.962759 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:30:20.963183 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:30:20.967558 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:30:20.973954 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:30:20.978907 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:30:20.983251 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:30:20.983292 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:30:20.991322 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:30:20.991374 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:30:20.996256 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:30:20.996322 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:30:21.001229 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:30:21.001281 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:30:21.004195 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:30:21.014719 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:30:21.017522 systemd-networkd[871]: eth0: DHCPv6 lease lost
Apr 30 03:30:21.023977 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:30:21.028886 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:30:21.029004 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:30:21.035275 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:30:21.035383 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:30:21.039288 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:30:21.039371 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:30:21.058881 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:30:21.062130 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:30:21.064569 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:30:21.068418 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:30:21.068494 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:30:21.073476 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:30:21.073628 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:30:21.079154 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:30:21.079211 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:30:21.082271 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:30:21.100987 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:30:21.101182 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:30:21.103848 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:30:21.104087 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:30:21.109962 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:30:21.110003 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:30:21.110322 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:30:21.110363 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:30:21.185808 kernel: hv_netvsc 6045bddd-9464-6045-bddd-94646045bddd eth0: Data path switched from VF: enP21869s1
Apr 30 03:30:21.111141 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:30:21.111179 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:30:21.126818 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:30:21.128995 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:30:21.137807 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:30:21.141559 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:30:21.141617 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:30:21.153313 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:30:21.153366 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:30:21.159126 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:30:21.159231 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:30:21.222179 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:30:21.222319 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:30:21.523298 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:30:21.523494 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:30:21.531741 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:30:21.534935 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:30:21.537729 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:30:21.552606 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:30:21.963359 systemd[1]: Switching root.
Apr 30 03:30:21.996923 systemd-journald[176]: Journal stopped
Apr 30 03:30:25.936855 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
Apr 30 03:30:25.936900 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 03:30:25.936912 kernel: SELinux: policy capability open_perms=1
Apr 30 03:30:25.936924 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 03:30:25.936932 kernel: SELinux: policy capability always_check_network=0
Apr 30 03:30:25.936940 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 03:30:25.936952 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 03:30:25.936963 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 03:30:25.936975 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 03:30:25.936984 kernel: audit: type=1403 audit(1745983823.267:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 03:30:25.936995 systemd[1]: Successfully loaded SELinux policy in 130.208ms.
Apr 30 03:30:25.937009 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.341ms.
Apr 30 03:30:25.937022 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:30:25.937032 systemd[1]: Detected virtualization microsoft.
Apr 30 03:30:25.937048 systemd[1]: Detected architecture x86-64.
Apr 30 03:30:25.937059 systemd[1]: Detected first boot.
Apr 30 03:30:25.937071 systemd[1]: Hostname set to .
Apr 30 03:30:25.937081 systemd[1]: Initializing machine ID from random generator.
Apr 30 03:30:25.937091 zram_generator::config[1175]: No configuration found.
Apr 30 03:30:25.937106 systemd[1]: Populated /etc with preset unit settings.
Apr 30 03:30:25.937116 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 03:30:25.937128 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 03:30:25.937139 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 03:30:25.937152 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 03:30:25.937162 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 03:30:25.937177 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 03:30:25.937189 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 03:30:25.937202 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 03:30:25.937212 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 03:30:25.937225 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 03:30:25.937235 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:30:25.937247 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:30:25.937258 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 03:30:25.937272 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 03:30:25.937284 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 03:30:25.937295 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:30:25.937305 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 03:30:25.937317 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:30:25.937327 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 03:30:25.937340 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:30:25.937353 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:30:25.937366 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:30:25.937379 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:30:25.937392 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 03:30:25.937402 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 03:30:25.937416 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:30:25.937427 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:30:25.937440 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:30:25.937450 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:30:25.937475 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:30:25.937489 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 03:30:25.937500 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 03:30:25.937512 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 03:30:25.937524 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 03:30:25.937539 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:25.937550 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 03:30:25.937563 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 03:30:25.937574 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 03:30:25.937587 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 03:30:25.937598 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:30:25.937611 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:30:25.937623 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 03:30:25.937638 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:30:25.937650 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:30:25.937663 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:30:25.937676 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 03:30:25.937687 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:30:25.937700 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 03:30:25.937713 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 30 03:30:25.937725 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 30 03:30:25.937739 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:30:25.937751 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:30:25.937763 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 03:30:25.937775 kernel: fuse: init (API version 7.39)
Apr 30 03:30:25.937785 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 03:30:25.937797 kernel: ACPI: bus type drm_connector registered
Apr 30 03:30:25.937834 systemd-journald[1296]: Collecting audit messages is disabled.
Apr 30 03:30:25.937864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:30:25.937878 systemd-journald[1296]: Journal started
Apr 30 03:30:25.937902 systemd-journald[1296]: Runtime Journal (/run/log/journal/f2467ebfda574faa984d46e2dec78242) is 8.0M, max 158.8M, 150.8M free.
Apr 30 03:30:25.951889 kernel: loop: module loaded
Apr 30 03:30:25.965490 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:25.972377 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:30:25.975347 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 03:30:25.978246 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 03:30:25.981347 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 03:30:25.984062 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 03:30:25.986931 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 03:30:25.989876 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 03:30:25.992799 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 03:30:25.996210 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:30:25.999808 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 03:30:25.999995 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 03:30:26.003390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:30:26.003582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:30:26.006891 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:30:26.007084 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:30:26.010432 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:30:26.010631 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:30:26.014629 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 03:30:26.014815 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 03:30:26.018106 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:30:26.018326 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:30:26.021768 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:30:26.025206 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 03:30:26.030268 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 03:30:26.052318 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 03:30:26.064654 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 03:30:26.075643 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 03:30:26.079210 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 03:30:26.104680 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 03:30:26.110650 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 03:30:26.114185 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:30:26.120635 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Apr 30 03:30:26.124329 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:30:26.125552 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:30:26.136622 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:30:26.145309 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:30:26.148622 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 03:30:26.151978 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 03:30:26.164705 systemd-journald[1296]: Time spent on flushing to /var/log/journal/f2467ebfda574faa984d46e2dec78242 is 24.086ms for 947 entries. Apr 30 03:30:26.164705 systemd-journald[1296]: System Journal (/var/log/journal/f2467ebfda574faa984d46e2dec78242) is 8.0M, max 2.6G, 2.6G free. Apr 30 03:30:26.209500 systemd-journald[1296]: Received client request to flush runtime journal. Apr 30 03:30:26.177674 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 03:30:26.186173 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 03:30:26.201020 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 03:30:26.206184 udevadm[1341]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 03:30:26.212437 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 03:30:26.239992 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:30:26.248170 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Apr 30 03:30:26.248192 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. 
Apr 30 03:30:26.253584 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:30:26.265649 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 03:30:26.488111 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 03:30:26.498764 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:30:26.520402 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Apr 30 03:30:26.520428 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Apr 30 03:30:26.525489 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:30:27.220695 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 03:30:27.236633 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:30:27.261766 systemd-udevd[1362]: Using default interface naming scheme 'v255'. Apr 30 03:30:27.338485 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:30:27.351710 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:30:27.402433 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 30 03:30:27.447060 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 03:30:27.481485 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:30:27.544515 kernel: hv_vmbus: registering driver hv_balloon Apr 30 03:30:27.548643 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Apr 30 03:30:27.556063 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Apr 30 03:30:27.564487 kernel: hv_vmbus: registering driver hyperv_fb Apr 30 03:30:27.571484 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Apr 30 03:30:27.571541 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Apr 30 03:30:27.579143 kernel: Console: switching to colour dummy device 80x25 Apr 30 03:30:27.585633 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 03:30:27.794057 systemd-networkd[1365]: lo: Link UP Apr 30 03:30:27.794524 systemd-networkd[1365]: lo: Gained carrier Apr 30 03:30:27.799688 systemd-networkd[1365]: Enumeration completed Apr 30 03:30:27.799963 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:30:27.804255 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:30:27.807064 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:30:27.841314 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1373) Apr 30 03:30:27.849484 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 03:30:27.881488 kernel: mlx5_core 556d:00:02.0 enP21869s1: Link up Apr 30 03:30:27.897883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:30:27.907274 kernel: hv_netvsc 6045bddd-9464-6045-bddd-94646045bddd eth0: Data path switched to VF: enP21869s1 Apr 30 03:30:27.910854 systemd-networkd[1365]: enP21869s1: Link UP Apr 30 03:30:27.911234 systemd-networkd[1365]: eth0: Link UP Apr 30 03:30:27.911242 systemd-networkd[1365]: eth0: Gained carrier Apr 30 03:30:27.911267 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 30 03:30:27.918570 systemd-networkd[1365]: enP21869s1: Gained carrier Apr 30 03:30:27.942507 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:30:27.942852 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:30:27.949693 systemd-networkd[1365]: eth0: DHCPv4 address 10.200.8.29/24, gateway 10.200.8.1 acquired from 168.63.129.16 Apr 30 03:30:28.028758 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 30 03:30:28.036482 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Apr 30 03:30:28.041011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:30:28.079090 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 03:30:28.087970 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 03:30:28.140562 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:30:28.173139 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:30:28.177645 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:30:28.182679 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:30:28.190783 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:30:28.215848 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:30:28.220050 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:30:28.221082 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 03:30:28.221106 systemd[1]: Reached target local-fs.target - Local File Systems. 
Apr 30 03:30:28.221527 systemd[1]: Reached target machines.target - Containers. Apr 30 03:30:28.223128 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 03:30:28.233886 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 03:30:28.238288 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 03:30:28.241622 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:30:28.243638 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:30:28.248680 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 03:30:28.255894 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 03:30:28.267735 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 03:30:28.278045 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 03:30:28.300504 kernel: loop0: detected capacity change from 0 to 142488 Apr 30 03:30:28.324213 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 03:30:28.325974 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 03:30:28.383437 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 30 03:30:28.641064 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 03:30:28.669571 kernel: loop1: detected capacity change from 0 to 210664 Apr 30 03:30:28.710549 kernel: loop2: detected capacity change from 0 to 31056 Apr 30 03:30:29.015725 systemd-networkd[1365]: enP21869s1: Gained IPv6LL Apr 30 03:30:29.079564 systemd-networkd[1365]: eth0: Gained IPv6LL Apr 30 03:30:29.088155 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:30:29.109487 kernel: loop3: detected capacity change from 0 to 140768 Apr 30 03:30:29.513538 kernel: loop4: detected capacity change from 0 to 142488 Apr 30 03:30:29.525480 kernel: loop5: detected capacity change from 0 to 210664 Apr 30 03:30:29.532508 kernel: loop6: detected capacity change from 0 to 31056 Apr 30 03:30:29.538630 kernel: loop7: detected capacity change from 0 to 140768 Apr 30 03:30:29.547121 (sd-merge)[1482]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Apr 30 03:30:29.547731 (sd-merge)[1482]: Merged extensions into '/usr'. Apr 30 03:30:29.552227 systemd[1]: Reloading requested from client PID 1462 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 03:30:29.552244 systemd[1]: Reloading... Apr 30 03:30:29.610496 zram_generator::config[1506]: No configuration found. Apr 30 03:30:29.776122 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:30:29.855234 systemd[1]: Reloading finished in 302 ms. Apr 30 03:30:29.874741 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 03:30:29.892810 systemd[1]: Starting ensure-sysext.service... Apr 30 03:30:29.897336 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Apr 30 03:30:29.910157 systemd[1]: Reloading requested from client PID 1573 ('systemctl') (unit ensure-sysext.service)... Apr 30 03:30:29.910179 systemd[1]: Reloading... Apr 30 03:30:29.940972 systemd-tmpfiles[1574]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 03:30:29.942313 systemd-tmpfiles[1574]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 03:30:29.944118 systemd-tmpfiles[1574]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 03:30:29.947191 systemd-tmpfiles[1574]: ACLs are not supported, ignoring. Apr 30 03:30:29.947397 systemd-tmpfiles[1574]: ACLs are not supported, ignoring. Apr 30 03:30:29.964191 systemd-tmpfiles[1574]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:30:29.964380 systemd-tmpfiles[1574]: Skipping /boot Apr 30 03:30:30.003783 systemd-tmpfiles[1574]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:30:30.003802 systemd-tmpfiles[1574]: Skipping /boot Apr 30 03:30:30.017613 zram_generator::config[1603]: No configuration found. Apr 30 03:30:30.164869 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:30:30.245820 systemd[1]: Reloading finished in 335 ms. Apr 30 03:30:30.269164 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:30:30.292959 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:30:30.316760 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 03:30:30.323926 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Apr 30 03:30:30.333814 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:30:30.342105 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:30:30.353231 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:30:30.354755 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:30:30.360856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:30:30.381864 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:30:30.397809 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:30:30.401111 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:30:30.401307 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:30:30.405526 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:30:30.405908 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:30:30.413678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:30:30.413909 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:30:30.437671 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:30:30.438027 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:30:30.457828 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 30 03:30:30.472917 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:30:30.476069 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:30:30.476253 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:30:30.477652 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:30:30.482329 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 03:30:30.487749 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:30:30.487980 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:30:30.490114 systemd-resolved[1680]: Positive Trust Anchors: Apr 30 03:30:30.490413 systemd-resolved[1680]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:30:30.490514 systemd-resolved[1680]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:30:30.495093 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:30:30.495460 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:30:30.499446 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 30 03:30:30.499898 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:30:30.517548 augenrules[1705]: No rules Apr 30 03:30:30.513430 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:30:30.514966 systemd-resolved[1680]: Using system hostname 'ci-4081.3.3-a-6f0285bad0'. Apr 30 03:30:30.521419 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:30:30.528245 systemd[1]: Reached target network.target - Network. Apr 30 03:30:30.530985 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:30:30.534126 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:30:30.537415 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:30:30.537710 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:30:30.546671 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:30:30.553678 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:30:30.558678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:30:30.566764 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:30:30.575202 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:30:30.576126 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:30:30.578992 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:30:30.579963 systemd[1]: Finished ensure-sysext.service. Apr 30 03:30:30.583054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Apr 30 03:30:30.583275 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:30:30.587093 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:30:30.587303 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:30:30.590728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:30:30.590910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:30:30.594605 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:30:30.594856 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:30:30.605453 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:30:30.605653 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:30:30.773567 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:30:30.777867 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:30:32.466674 ldconfig[1459]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 03:30:32.479963 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 03:30:32.492654 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:30:32.502044 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:30:32.505333 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:30:32.508431 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Apr 30 03:30:32.511797 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:30:32.515434 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:30:32.518265 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:30:32.521435 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:30:32.525184 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:30:32.525239 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:30:32.527699 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:30:32.530877 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:30:32.535036 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:30:32.539128 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:30:32.543059 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:30:32.545825 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:30:32.548363 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:30:32.551038 systemd[1]: System is tainted: cgroupsv1 Apr 30 03:30:32.551093 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:30:32.551119 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:30:32.560556 systemd[1]: Starting chronyd.service - NTP client/server... Apr 30 03:30:32.567596 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:30:32.573631 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Apr 30 03:30:32.578908 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:30:32.596562 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:30:32.603000 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:30:32.611698 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:30:32.611760 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Apr 30 03:30:32.614928 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Apr 30 03:30:32.617966 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Apr 30 03:30:32.621605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:30:32.634460 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:30:32.640099 (chronyd)[1742]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Apr 30 03:30:32.646609 jq[1747]: false Apr 30 03:30:32.648693 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:30:32.666637 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 03:30:32.679814 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Apr 30 03:30:32.685171 extend-filesystems[1748]: Found loop4 Apr 30 03:30:32.685171 extend-filesystems[1748]: Found loop5 Apr 30 03:30:32.685171 extend-filesystems[1748]: Found loop6 Apr 30 03:30:32.685171 extend-filesystems[1748]: Found loop7 Apr 30 03:30:32.685171 extend-filesystems[1748]: Found sda Apr 30 03:30:32.685171 extend-filesystems[1748]: Found sda1 Apr 30 03:30:32.685171 extend-filesystems[1748]: Found sda2 Apr 30 03:30:32.685171 extend-filesystems[1748]: Found sda3 Apr 30 03:30:32.685171 extend-filesystems[1748]: Found usr Apr 30 03:30:32.685171 extend-filesystems[1748]: Found sda4 Apr 30 03:30:32.685171 extend-filesystems[1748]: Found sda6 Apr 30 03:30:32.685171 extend-filesystems[1748]: Found sda7 Apr 30 03:30:32.685171 extend-filesystems[1748]: Found sda9 Apr 30 03:30:32.685171 extend-filesystems[1748]: Checking size of /dev/sda9 Apr 30 03:30:32.772659 kernel: hv_utils: KVP IC version 4.0 Apr 30 03:30:32.707749 chronyd[1766]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Apr 30 03:30:32.691742 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:30:32.780283 extend-filesystems[1748]: Old size kept for /dev/sda9 Apr 30 03:30:32.780283 extend-filesystems[1748]: Found sr0 Apr 30 03:30:32.710825 KVP[1751]: KVP starting; pid is:1751 Apr 30 03:30:32.742050 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:30:32.723741 KVP[1751]: KVP LIC Version: 3.1 Apr 30 03:30:32.746871 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:30:32.759615 chronyd[1766]: Timezone right/UTC failed leap second check, ignoring Apr 30 03:30:32.758709 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 30 03:30:32.759865 chronyd[1766]: Loaded seccomp filter (level 2) Apr 30 03:30:32.777062 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:30:32.791247 dbus-daemon[1746]: [system] SELinux support is enabled Apr 30 03:30:32.783066 systemd[1]: Started chronyd.service - NTP client/server. Apr 30 03:30:32.792861 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:30:32.800373 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:30:32.801101 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:30:32.801491 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:30:32.802730 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:30:32.818123 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:30:32.831939 jq[1783]: true Apr 30 03:30:32.818455 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:30:32.838045 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:30:32.844015 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:30:32.844301 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 30 03:30:32.887496 update_engine[1780]: I20250430 03:30:32.886003 1780 main.cc:92] Flatcar Update Engine starting Apr 30 03:30:32.899765 update_engine[1780]: I20250430 03:30:32.887763 1780 update_check_scheduler.cc:74] Next update check in 9m45s Apr 30 03:30:32.900937 (ntainerd)[1797]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:30:32.911821 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:30:32.911866 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:30:32.917503 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:30:32.917526 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:30:32.926213 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:30:32.934974 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:30:32.940714 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:30:32.948126 jq[1796]: true Apr 30 03:30:32.981720 systemd-logind[1775]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Apr 30 03:30:32.989554 coreos-metadata[1744]: Apr 30 03:30:32.989 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 30 03:30:32.993955 systemd-logind[1775]: New seat seat0. Apr 30 03:30:32.996631 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 30 03:30:33.002850 tar[1793]: linux-amd64/helm Apr 30 03:30:33.007944 coreos-metadata[1744]: Apr 30 03:30:33.004 INFO Fetch successful Apr 30 03:30:33.007944 coreos-metadata[1744]: Apr 30 03:30:33.004 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Apr 30 03:30:33.014920 coreos-metadata[1744]: Apr 30 03:30:33.010 INFO Fetch successful Apr 30 03:30:33.014920 coreos-metadata[1744]: Apr 30 03:30:33.014 INFO Fetching http://168.63.129.16/machine/931f2a80-e415-45c9-8f27-609553bdc335/bf0c6ce5%2D72f3%2D41ec%2Db2ca%2Dde053f841b10.%5Fci%2D4081.3.3%2Da%2D6f0285bad0?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Apr 30 03:30:33.018173 coreos-metadata[1744]: Apr 30 03:30:33.018 INFO Fetch successful Apr 30 03:30:33.020455 coreos-metadata[1744]: Apr 30 03:30:33.019 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Apr 30 03:30:33.036539 coreos-metadata[1744]: Apr 30 03:30:33.036 INFO Fetch successful Apr 30 03:30:33.096375 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:30:33.109022 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:30:33.159339 bash[1840]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:30:33.160813 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:30:33.169216 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 30 03:30:33.267127 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1844) Apr 30 03:30:33.311477 sshd_keygen[1795]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:30:33.391872 locksmithd[1809]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:30:33.444295 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Apr 30 03:30:33.459905 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:30:33.472956 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Apr 30 03:30:33.482633 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:30:33.482977 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:30:33.498812 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:30:33.534567 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Apr 30 03:30:33.557947 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:30:33.576663 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:30:33.594299 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:30:33.599282 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:30:33.907934 tar[1793]: linux-amd64/LICENSE Apr 30 03:30:33.908409 tar[1793]: linux-amd64/README.md Apr 30 03:30:33.928344 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:30:34.005169 containerd[1797]: time="2025-04-30T03:30:34.005063500Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:30:34.044691 containerd[1797]: time="2025-04-30T03:30:34.044625300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:34.047129 containerd[1797]: time="2025-04-30T03:30:34.046678800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:30:34.047129 containerd[1797]: time="2025-04-30T03:30:34.046724000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Apr 30 03:30:34.047129 containerd[1797]: time="2025-04-30T03:30:34.046747100Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:30:34.047129 containerd[1797]: time="2025-04-30T03:30:34.046943800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:30:34.047129 containerd[1797]: time="2025-04-30T03:30:34.046967100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:34.047129 containerd[1797]: time="2025-04-30T03:30:34.047046200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:30:34.047129 containerd[1797]: time="2025-04-30T03:30:34.047062500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:34.047429 containerd[1797]: time="2025-04-30T03:30:34.047345400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:30:34.047429 containerd[1797]: time="2025-04-30T03:30:34.047369700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:34.047429 containerd[1797]: time="2025-04-30T03:30:34.047389400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:30:34.047429 containerd[1797]: time="2025-04-30T03:30:34.047403400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Apr 30 03:30:34.047604 containerd[1797]: time="2025-04-30T03:30:34.047549900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:34.047840 containerd[1797]: time="2025-04-30T03:30:34.047805700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:34.048062 containerd[1797]: time="2025-04-30T03:30:34.048036200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:30:34.048062 containerd[1797]: time="2025-04-30T03:30:34.048058600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:30:34.048196 containerd[1797]: time="2025-04-30T03:30:34.048175200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:30:34.048272 containerd[1797]: time="2025-04-30T03:30:34.048240000Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:30:34.063797 containerd[1797]: time="2025-04-30T03:30:34.063704900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:30:34.063907 containerd[1797]: time="2025-04-30T03:30:34.063862700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:30:34.063952 containerd[1797]: time="2025-04-30T03:30:34.063894700Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:30:34.063952 containerd[1797]: time="2025-04-30T03:30:34.063931900Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Apr 30 03:30:34.064038 containerd[1797]: time="2025-04-30T03:30:34.063953300Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:30:34.065026 containerd[1797]: time="2025-04-30T03:30:34.064137200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:30:34.065026 containerd[1797]: time="2025-04-30T03:30:34.064865100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:30:34.065130 containerd[1797]: time="2025-04-30T03:30:34.065029900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:30:34.065130 containerd[1797]: time="2025-04-30T03:30:34.065082000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:30:34.065130 containerd[1797]: time="2025-04-30T03:30:34.065101800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:30:34.065130 containerd[1797]: time="2025-04-30T03:30:34.065123400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:30:34.065281 containerd[1797]: time="2025-04-30T03:30:34.065165000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:30:34.065281 containerd[1797]: time="2025-04-30T03:30:34.065186400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:30:34.065281 containerd[1797]: time="2025-04-30T03:30:34.065207900Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Apr 30 03:30:34.065281 containerd[1797]: time="2025-04-30T03:30:34.065241600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:30:34.065281 containerd[1797]: time="2025-04-30T03:30:34.065260800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:30:34.065281 containerd[1797]: time="2025-04-30T03:30:34.065279400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:30:34.065504 containerd[1797]: time="2025-04-30T03:30:34.065296100Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:30:34.065504 containerd[1797]: time="2025-04-30T03:30:34.065345300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:30:34.065504 containerd[1797]: time="2025-04-30T03:30:34.065366000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 03:30:34.065504 containerd[1797]: time="2025-04-30T03:30:34.065396400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:30:34.065504 containerd[1797]: time="2025-04-30T03:30:34.065417000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:30:34.065504 containerd[1797]: time="2025-04-30T03:30:34.065434200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:30:34.065504 containerd[1797]: time="2025-04-30T03:30:34.065452500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:30:34.065504 containerd[1797]: time="2025-04-30T03:30:34.065484700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Apr 30 03:30:34.065782 containerd[1797]: time="2025-04-30T03:30:34.065520500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:30:34.065782 containerd[1797]: time="2025-04-30T03:30:34.065555400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:30:34.065782 containerd[1797]: time="2025-04-30T03:30:34.065577700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:30:34.065782 containerd[1797]: time="2025-04-30T03:30:34.065594900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:30:34.065782 containerd[1797]: time="2025-04-30T03:30:34.065619900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:30:34.065782 containerd[1797]: time="2025-04-30T03:30:34.065656300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:30:34.065782 containerd[1797]: time="2025-04-30T03:30:34.065679400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:30:34.065782 containerd[1797]: time="2025-04-30T03:30:34.065722200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:30:34.065782 containerd[1797]: time="2025-04-30T03:30:34.065742500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:30:34.065782 containerd[1797]: time="2025-04-30T03:30:34.065759100Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:30:34.066159 containerd[1797]: time="2025-04-30T03:30:34.065833700Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Apr 30 03:30:34.066159 containerd[1797]: time="2025-04-30T03:30:34.065923300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:30:34.066159 containerd[1797]: time="2025-04-30T03:30:34.065941000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:30:34.066159 containerd[1797]: time="2025-04-30T03:30:34.065958300Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:30:34.066159 containerd[1797]: time="2025-04-30T03:30:34.065972500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:30:34.066159 containerd[1797]: time="2025-04-30T03:30:34.066008900Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:30:34.066159 containerd[1797]: time="2025-04-30T03:30:34.066024500Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:30:34.066159 containerd[1797]: time="2025-04-30T03:30:34.066039700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 30 03:30:34.067601 containerd[1797]: time="2025-04-30T03:30:34.066456600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:30:34.067601 containerd[1797]: time="2025-04-30T03:30:34.066577500Z" level=info msg="Connect containerd service" Apr 30 03:30:34.067601 containerd[1797]: time="2025-04-30T03:30:34.066628800Z" level=info msg="using legacy CRI server" Apr 30 03:30:34.067601 containerd[1797]: time="2025-04-30T03:30:34.066649700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:30:34.067601 containerd[1797]: time="2025-04-30T03:30:34.066830900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:30:34.067968 containerd[1797]: time="2025-04-30T03:30:34.067734300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:30:34.069038 containerd[1797]: time="2025-04-30T03:30:34.067880000Z" level=info msg="Start subscribing containerd event" Apr 30 03:30:34.069038 containerd[1797]: time="2025-04-30T03:30:34.068049900Z" level=info msg="Start recovering state" Apr 30 03:30:34.069038 containerd[1797]: time="2025-04-30T03:30:34.068221900Z" level=info msg="Start event monitor" Apr 30 03:30:34.069038 containerd[1797]: time="2025-04-30T03:30:34.068772300Z" 
level=info msg="Start snapshots syncer" Apr 30 03:30:34.069038 containerd[1797]: time="2025-04-30T03:30:34.068803100Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:30:34.069038 containerd[1797]: time="2025-04-30T03:30:34.068823700Z" level=info msg="Start streaming server" Apr 30 03:30:34.069038 containerd[1797]: time="2025-04-30T03:30:34.069005000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:30:34.069299 containerd[1797]: time="2025-04-30T03:30:34.069067400Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:30:34.069271 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:30:34.075083 containerd[1797]: time="2025-04-30T03:30:34.074957900Z" level=info msg="containerd successfully booted in 0.071126s" Apr 30 03:30:34.338674 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:30:34.344057 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:30:34.348559 systemd[1]: Startup finished in 642ms (firmware) + 22.041s (loader) + 11.433s (kernel) + 11.205s (userspace) = 45.323s. Apr 30 03:30:34.357944 (kubelet)[1926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:30:34.601608 login[1904]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Apr 30 03:30:34.601970 login[1903]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 03:30:34.616315 systemd-logind[1775]: New session 2 of user core. Apr 30 03:30:34.617219 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:30:34.625718 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:30:34.644728 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Apr 30 03:30:34.657997 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:30:34.663770 (systemd)[1939]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:30:34.857489 systemd[1939]: Queued start job for default target default.target. Apr 30 03:30:34.858401 systemd[1939]: Created slice app.slice - User Application Slice. Apr 30 03:30:34.858432 systemd[1939]: Reached target paths.target - Paths. Apr 30 03:30:34.858450 systemd[1939]: Reached target timers.target - Timers. Apr 30 03:30:34.863673 systemd[1939]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:30:34.876433 systemd[1939]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:30:34.876792 systemd[1939]: Reached target sockets.target - Sockets. Apr 30 03:30:34.876816 systemd[1939]: Reached target basic.target - Basic System. Apr 30 03:30:34.876864 systemd[1939]: Reached target default.target - Main User Target. Apr 30 03:30:34.876897 systemd[1939]: Startup finished in 205ms. Apr 30 03:30:34.877928 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:30:34.886654 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 03:30:35.109546 kubelet[1926]: E0430 03:30:35.109392 1926 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:30:35.112203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:30:35.112577 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 03:30:35.387167 waagent[1899]: 2025-04-30T03:30:35.386981Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Apr 30 03:30:35.404604 waagent[1899]: 2025-04-30T03:30:35.388621Z INFO Daemon Daemon OS: flatcar 4081.3.3 Apr 30 03:30:35.404604 waagent[1899]: 2025-04-30T03:30:35.389626Z INFO Daemon Daemon Python: 3.11.9 Apr 30 03:30:35.404604 waagent[1899]: 2025-04-30T03:30:35.390329Z INFO Daemon Daemon Run daemon Apr 30 03:30:35.404604 waagent[1899]: 2025-04-30T03:30:35.391209Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.3' Apr 30 03:30:35.404604 waagent[1899]: 2025-04-30T03:30:35.392072Z INFO Daemon Daemon Using waagent for provisioning Apr 30 03:30:35.404604 waagent[1899]: 2025-04-30T03:30:35.393117Z INFO Daemon Daemon Activate resource disk Apr 30 03:30:35.404604 waagent[1899]: 2025-04-30T03:30:35.393448Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 30 03:30:35.404604 waagent[1899]: 2025-04-30T03:30:35.397511Z INFO Daemon Daemon Found device: None Apr 30 03:30:35.404604 waagent[1899]: 2025-04-30T03:30:35.398453Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 30 03:30:35.404604 waagent[1899]: 2025-04-30T03:30:35.399347Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 30 03:30:35.404604 waagent[1899]: 2025-04-30T03:30:35.401991Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 03:30:35.404604 waagent[1899]: 2025-04-30T03:30:35.402204Z INFO Daemon Daemon Running default provisioning handler Apr 30 03:30:35.430206 waagent[1899]: 2025-04-30T03:30:35.430117Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Apr 30 03:30:35.437122 waagent[1899]: 2025-04-30T03:30:35.437054Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 30 03:30:35.441755 waagent[1899]: 2025-04-30T03:30:35.441690Z INFO Daemon Daemon cloud-init is enabled: False Apr 30 03:30:35.446072 waagent[1899]: 2025-04-30T03:30:35.442716Z INFO Daemon Daemon Copying ovf-env.xml Apr 30 03:30:35.527500 waagent[1899]: 2025-04-30T03:30:35.527252Z INFO Daemon Daemon Successfully mounted dvd Apr 30 03:30:35.543493 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Apr 30 03:30:35.544386 waagent[1899]: 2025-04-30T03:30:35.544313Z INFO Daemon Daemon Detect protocol endpoint Apr 30 03:30:35.547717 waagent[1899]: 2025-04-30T03:30:35.547648Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 03:30:35.561405 waagent[1899]: 2025-04-30T03:30:35.548887Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Apr 30 03:30:35.561405 waagent[1899]: 2025-04-30T03:30:35.549380Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 30 03:30:35.561405 waagent[1899]: 2025-04-30T03:30:35.550559Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 30 03:30:35.561405 waagent[1899]: 2025-04-30T03:30:35.550937Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 30 03:30:35.585525 waagent[1899]: 2025-04-30T03:30:35.585438Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 30 03:30:35.595404 waagent[1899]: 2025-04-30T03:30:35.587235Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 30 03:30:35.595404 waagent[1899]: 2025-04-30T03:30:35.588118Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 30 03:30:35.603413 login[1904]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 03:30:35.608975 systemd-logind[1775]: New session 1 of user core. Apr 30 03:30:35.615873 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 30 03:30:35.662647 waagent[1899]: 2025-04-30T03:30:35.662452Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 30 03:30:35.668577 waagent[1899]: 2025-04-30T03:30:35.663915Z INFO Daemon Daemon Forcing an update of the goal state. Apr 30 03:30:35.671511 waagent[1899]: 2025-04-30T03:30:35.671442Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 30 03:30:35.685137 waagent[1899]: 2025-04-30T03:30:35.685072Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Apr 30 03:30:35.696999 waagent[1899]: 2025-04-30T03:30:35.686841Z INFO Daemon Apr 30 03:30:35.696999 waagent[1899]: 2025-04-30T03:30:35.688869Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: d6261d17-6614-4b7d-a5cf-bb5b1277a7cd eTag: 15282202998400592635 source: Fabric] Apr 30 03:30:35.696999 waagent[1899]: 2025-04-30T03:30:35.690486Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Apr 30 03:30:35.696999 waagent[1899]: 2025-04-30T03:30:35.691567Z INFO Daemon Apr 30 03:30:35.696999 waagent[1899]: 2025-04-30T03:30:35.691973Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 30 03:30:35.702621 waagent[1899]: 2025-04-30T03:30:35.702575Z INFO Daemon Daemon Downloading artifacts profile blob Apr 30 03:30:35.783317 waagent[1899]: 2025-04-30T03:30:35.783225Z INFO Daemon Downloaded certificate {'thumbprint': '766678C4AEB2A3E1EFE42E46E593F4B5E43C9F95', 'hasPrivateKey': True} Apr 30 03:30:35.789376 waagent[1899]: 2025-04-30T03:30:35.789315Z INFO Daemon Downloaded certificate {'thumbprint': '14A71761DA1690449D6BBCDC12C9520EBEBC4FEA', 'hasPrivateKey': False} Apr 30 03:30:35.796040 waagent[1899]: 2025-04-30T03:30:35.790830Z INFO Daemon Fetch goal state completed Apr 30 03:30:35.799094 waagent[1899]: 2025-04-30T03:30:35.799041Z INFO Daemon Daemon Starting provisioning Apr 30 03:30:35.806113 waagent[1899]: 2025-04-30T03:30:35.800232Z INFO Daemon Daemon Handle ovf-env.xml. 
Apr 30 03:30:35.806113 waagent[1899]: 2025-04-30T03:30:35.800741Z INFO Daemon Daemon Set hostname [ci-4081.3.3-a-6f0285bad0] Apr 30 03:30:35.819804 waagent[1899]: 2025-04-30T03:30:35.819731Z INFO Daemon Daemon Publish hostname [ci-4081.3.3-a-6f0285bad0] Apr 30 03:30:35.821596 waagent[1899]: 2025-04-30T03:30:35.821400Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 30 03:30:35.828186 waagent[1899]: 2025-04-30T03:30:35.821746Z INFO Daemon Daemon Primary interface is [eth0] Apr 30 03:30:35.843978 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:30:35.843987 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:30:35.844041 systemd-networkd[1365]: eth0: DHCP lease lost Apr 30 03:30:35.845498 waagent[1899]: 2025-04-30T03:30:35.845395Z INFO Daemon Daemon Create user account if not exists Apr 30 03:30:35.853151 waagent[1899]: 2025-04-30T03:30:35.847354Z INFO Daemon Daemon User core already exists, skip useradd Apr 30 03:30:35.853151 waagent[1899]: 2025-04-30T03:30:35.848096Z INFO Daemon Daemon Configure sudoer Apr 30 03:30:35.853151 waagent[1899]: 2025-04-30T03:30:35.849256Z INFO Daemon Daemon Configure sshd Apr 30 03:30:35.853151 waagent[1899]: 2025-04-30T03:30:35.849635Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 30 03:30:35.853151 waagent[1899]: 2025-04-30T03:30:35.850292Z INFO Daemon Daemon Deploy ssh public key. 
Apr 30 03:30:35.864590 systemd-networkd[1365]: eth0: DHCPv6 lease lost
Apr 30 03:30:35.885526 systemd-networkd[1365]: eth0: DHCPv4 address 10.200.8.29/24, gateway 10.200.8.1 acquired from 168.63.129.16
Apr 30 03:30:36.990830 waagent[1899]: 2025-04-30T03:30:36.990754Z INFO Daemon Daemon Provisioning complete
Apr 30 03:30:37.003088 waagent[1899]: 2025-04-30T03:30:37.003016Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Apr 30 03:30:37.010216 waagent[1899]: 2025-04-30T03:30:37.004275Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Apr 30 03:30:37.010216 waagent[1899]: 2025-04-30T03:30:37.005202Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Apr 30 03:30:37.131892 waagent[2000]: 2025-04-30T03:30:37.131786Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Apr 30 03:30:37.132375 waagent[2000]: 2025-04-30T03:30:37.131964Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.3
Apr 30 03:30:37.132375 waagent[2000]: 2025-04-30T03:30:37.132049Z INFO ExtHandler ExtHandler Python: 3.11.9
Apr 30 03:30:37.163392 waagent[2000]: 2025-04-30T03:30:37.163313Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Apr 30 03:30:37.163625 waagent[2000]: 2025-04-30T03:30:37.163575Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 30 03:30:37.163722 waagent[2000]: 2025-04-30T03:30:37.163680Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 30 03:30:37.171675 waagent[2000]: 2025-04-30T03:30:37.171608Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Apr 30 03:30:37.177143 waagent[2000]: 2025-04-30T03:30:37.177090Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164
Apr 30 03:30:37.177603 waagent[2000]: 2025-04-30T03:30:37.177552Z INFO ExtHandler
Apr 30 03:30:37.177693 waagent[2000]: 2025-04-30T03:30:37.177648Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: aac98e63-a45d-426f-9b72-8cacb9a22d2e eTag: 15282202998400592635 source: Fabric]
Apr 30 03:30:37.178026 waagent[2000]: 2025-04-30T03:30:37.177970Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Apr 30 03:30:37.178601 waagent[2000]: 2025-04-30T03:30:37.178545Z INFO ExtHandler
Apr 30 03:30:37.178664 waagent[2000]: 2025-04-30T03:30:37.178629Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Apr 30 03:30:37.182085 waagent[2000]: 2025-04-30T03:30:37.182038Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Apr 30 03:30:37.246044 waagent[2000]: 2025-04-30T03:30:37.245887Z INFO ExtHandler Downloaded certificate {'thumbprint': '766678C4AEB2A3E1EFE42E46E593F4B5E43C9F95', 'hasPrivateKey': True}
Apr 30 03:30:37.246445 waagent[2000]: 2025-04-30T03:30:37.246387Z INFO ExtHandler Downloaded certificate {'thumbprint': '14A71761DA1690449D6BBCDC12C9520EBEBC4FEA', 'hasPrivateKey': False}
Apr 30 03:30:37.246960 waagent[2000]: 2025-04-30T03:30:37.246906Z INFO ExtHandler Fetch goal state completed
Apr 30 03:30:37.262253 waagent[2000]: 2025-04-30T03:30:37.262180Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2000
Apr 30 03:30:37.262418 waagent[2000]: 2025-04-30T03:30:37.262366Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Apr 30 03:30:37.264039 waagent[2000]: 2025-04-30T03:30:37.263977Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.3', '', 'Flatcar Container Linux by Kinvolk']
Apr 30 03:30:37.264424 waagent[2000]: 2025-04-30T03:30:37.264374Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Apr 30 03:30:37.296525 waagent[2000]: 2025-04-30T03:30:37.296445Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Apr 30 03:30:37.296795 waagent[2000]: 2025-04-30T03:30:37.296735Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Apr 30 03:30:37.304853 waagent[2000]: 2025-04-30T03:30:37.304810Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Apr 30 03:30:37.311667 systemd[1]: Reloading requested from client PID 2015 ('systemctl') (unit waagent.service)...
Apr 30 03:30:37.311683 systemd[1]: Reloading...
Apr 30 03:30:37.398493 zram_generator::config[2052]: No configuration found.
Apr 30 03:30:37.522075 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:30:37.604810 systemd[1]: Reloading finished in 292 ms.
Apr 30 03:30:37.632151 waagent[2000]: 2025-04-30T03:30:37.632049Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Apr 30 03:30:37.640614 systemd[1]: Reloading requested from client PID 2111 ('systemctl') (unit waagent.service)...
Apr 30 03:30:37.640631 systemd[1]: Reloading...
Apr 30 03:30:37.731521 zram_generator::config[2148]: No configuration found.
Apr 30 03:30:37.854708 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:30:37.936231 systemd[1]: Reloading finished in 295 ms.
Apr 30 03:30:37.961914 waagent[2000]: 2025-04-30T03:30:37.961787Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 30 03:30:37.962354 waagent[2000]: 2025-04-30T03:30:37.962026Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 30 03:30:38.260034 waagent[2000]: 2025-04-30T03:30:38.259861Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 30 03:30:38.260733 waagent[2000]: 2025-04-30T03:30:38.260668Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 30 03:30:38.261560 waagent[2000]: 2025-04-30T03:30:38.261491Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 30 03:30:38.261686 waagent[2000]: 2025-04-30T03:30:38.261638Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 03:30:38.261794 waagent[2000]: 2025-04-30T03:30:38.261755Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 03:30:38.262069 waagent[2000]: 2025-04-30T03:30:38.262009Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 30 03:30:38.262507 waagent[2000]: 2025-04-30T03:30:38.262438Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Apr 30 03:30:38.262634 waagent[2000]: 2025-04-30T03:30:38.262591Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 30 03:30:38.262741 waagent[2000]: 2025-04-30T03:30:38.262695Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 30 03:30:38.263058 waagent[2000]: 2025-04-30T03:30:38.263009Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Apr 30 03:30:38.263227 waagent[2000]: 2025-04-30T03:30:38.263181Z INFO EnvHandler ExtHandler Configure routes
Apr 30 03:30:38.263337 waagent[2000]: 2025-04-30T03:30:38.263295Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Apr 30 03:30:38.263870 waagent[2000]: 2025-04-30T03:30:38.263814Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Apr 30 03:30:38.263870 waagent[2000]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Apr 30 03:30:38.263870 waagent[2000]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Apr 30 03:30:38.263870 waagent[2000]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Apr 30 03:30:38.263870 waagent[2000]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Apr 30 03:30:38.263870 waagent[2000]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 30 03:30:38.263870 waagent[2000]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 30 03:30:38.264266 waagent[2000]: 2025-04-30T03:30:38.264200Z INFO EnvHandler ExtHandler Gateway:None
Apr 30 03:30:38.264700 waagent[2000]: 2025-04-30T03:30:38.264376Z INFO EnvHandler ExtHandler Routes:None
Apr 30 03:30:38.265097 waagent[2000]: 2025-04-30T03:30:38.265048Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Apr 30 03:30:38.265161 waagent[2000]: 2025-04-30T03:30:38.265107Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
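Editor's note: the Destination, Gateway, and Mask columns in the /proc/net/route dump above are little-endian hexadecimal IPv4 values. A minimal sketch of decoding them (the helper name is ours, not waagent's); for example, the gateway field 0108C80A decodes to 10.200.8.1, matching the DHCPv4 lease logged earlier:

```python
import socket
import struct

def decode_route_field(hexfield: str) -> str:
    """Decode one little-endian hex IPv4 field from /proc/net/route."""
    return socket.inet_ntoa(struct.pack("<I", int(hexfield, 16)))

print(decode_route_field("0108C80A"))  # default gateway -> 10.200.8.1
print(decode_route_field("0008C80A"))  # subnet route   -> 10.200.8.0
print(decode_route_field("00FFFFFF"))  # /24 netmask    -> 255.255.255.0
```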
Apr 30 03:30:38.265394 waagent[2000]: 2025-04-30T03:30:38.265356Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Apr 30 03:30:38.271324 waagent[2000]: 2025-04-30T03:30:38.271259Z INFO ExtHandler ExtHandler
Apr 30 03:30:38.271672 waagent[2000]: 2025-04-30T03:30:38.271626Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: d3530038-404f-4a3b-aba7-8071209845fb correlation be0a9e9d-0463-4e85-90d2-7c8790cfa337 created: 2025-04-30T03:29:38.766724Z]
Apr 30 03:30:38.272615 waagent[2000]: 2025-04-30T03:30:38.272571Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Apr 30 03:30:38.273173 waagent[2000]: 2025-04-30T03:30:38.273126Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Apr 30 03:30:38.296843 waagent[2000]: 2025-04-30T03:30:38.296773Z INFO MonitorHandler ExtHandler Network interfaces:
Apr 30 03:30:38.296843 waagent[2000]: Executing ['ip', '-a', '-o', 'link']:
Apr 30 03:30:38.296843 waagent[2000]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Apr 30 03:30:38.296843 waagent[2000]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:dd:94:64 brd ff:ff:ff:ff:ff:ff
Apr 30 03:30:38.296843 waagent[2000]: 3: enP21869s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:dd:94:64 brd ff:ff:ff:ff:ff:ff\ altname enP21869p0s2
Apr 30 03:30:38.296843 waagent[2000]: Executing ['ip', '-4', '-a', '-o', 'address']:
Apr 30 03:30:38.296843 waagent[2000]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Apr 30 03:30:38.296843 waagent[2000]: 2: eth0 inet 10.200.8.29/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Apr 30 03:30:38.296843 waagent[2000]: Executing ['ip', '-6', '-a', '-o', 'address']:
Apr 30 03:30:38.296843 waagent[2000]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Apr 30 03:30:38.296843 waagent[2000]: 2: eth0 inet6 fe80::6245:bdff:fedd:9464/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Apr 30 03:30:38.296843 waagent[2000]: 3: enP21869s1 inet6 fe80::6245:bdff:fedd:9464/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Apr 30 03:30:38.323059 waagent[2000]: 2025-04-30T03:30:38.322797Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: FCD4AF2A-17A2-458D-A1BF-58575C148C3D;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Apr 30 03:30:38.388039 waagent[2000]: 2025-04-30T03:30:38.387948Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Apr 30 03:30:38.388039 waagent[2000]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:30:38.388039 waagent[2000]: pkts bytes target prot opt in out source destination
Apr 30 03:30:38.388039 waagent[2000]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:30:38.388039 waagent[2000]: pkts bytes target prot opt in out source destination
Apr 30 03:30:38.388039 waagent[2000]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:30:38.388039 waagent[2000]: pkts bytes target prot opt in out source destination
Apr 30 03:30:38.388039 waagent[2000]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 30 03:30:38.388039 waagent[2000]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 30 03:30:38.388039 waagent[2000]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 30 03:30:38.391535 waagent[2000]: 2025-04-30T03:30:38.391441Z INFO EnvHandler ExtHandler Current Firewall rules:
Apr 30 03:30:38.391535 waagent[2000]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:30:38.391535 waagent[2000]: pkts bytes target prot opt in out source destination
Apr 30 03:30:38.391535 waagent[2000]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:30:38.391535 waagent[2000]: pkts bytes target prot opt in out source destination
Apr 30 03:30:38.391535 waagent[2000]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:30:38.391535 waagent[2000]: pkts bytes target prot opt in out source destination
Apr 30 03:30:38.391535 waagent[2000]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 30 03:30:38.391535 waagent[2000]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 30 03:30:38.391535 waagent[2000]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 30 03:30:38.391956 waagent[2000]: 2025-04-30T03:30:38.391823Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Apr 30 03:30:45.363450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:30:45.370693 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:30:45.483673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:30:45.488306 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:30:46.083675 kubelet[2250]: E0430 03:30:46.083521 2250 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:30:46.087803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:30:46.088125 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:30:55.663824 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
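Editor's note: the agent's earlier "DROP rule is not available" message refers to a presence check performed before programming the OUTPUT-chain rules printed above; `iptables -C` exits 0 when a matching rule already exists. A hedged sketch of such a check, with the rule spec inferred from the DROP rule in the log (this is not waagent's actual code; the helper and constant names are hypothetical):

```python
import subprocess

WIRESERVER = "168.63.129.16"  # Azure wire server address, from the log above

# Rule spec inferred from the printed DROP rule; an illustrative assumption.
DROP_RULE = ["OUTPUT", "-d", WIRESERVER, "-p", "tcp",
             "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"]

def drop_rule_present() -> bool:
    """Return True if the DROP rule already exists (iptables -C exits 0)."""
    res = subprocess.run(["iptables", "-w", "-C", *DROP_RULE],
                         stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return res.returncode == 0
```

Running this requires root and the iptables binary; on the machine above it would return False before the EnvHandler thread installs the rules and True afterwards.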
Apr 30 03:30:55.675837 systemd[1]: Started sshd@0-10.200.8.29:22-10.200.16.10:34018.service - OpenSSH per-connection server daemon (10.200.16.10:34018). Apr 30 03:30:56.239680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 03:30:56.245740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:30:56.349885 sshd[2258]: Accepted publickey for core from 10.200.16.10 port 34018 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:30:56.354162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:30:56.356916 sshd[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:56.364412 systemd-logind[1775]: New session 3 of user core. Apr 30 03:30:56.368002 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:30:56.369182 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:30:56.554592 chronyd[1766]: Selected source PHC0 Apr 30 03:30:56.895869 systemd[1]: Started sshd@1-10.200.8.29:22-10.200.16.10:34024.service - OpenSSH per-connection server daemon (10.200.16.10:34024). Apr 30 03:30:56.938109 kubelet[2271]: E0430 03:30:56.938041 2271 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:30:56.940767 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:30:56.941113 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 03:30:57.527802 sshd[2281]: Accepted publickey for core from 10.200.16.10 port 34024 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:30:57.529680 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:57.534909 systemd-logind[1775]: New session 4 of user core. Apr 30 03:30:57.540799 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:30:57.972074 sshd[2281]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:57.975610 systemd[1]: sshd@1-10.200.8.29:22-10.200.16.10:34024.service: Deactivated successfully. Apr 30 03:30:57.981011 systemd-logind[1775]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:30:57.981273 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:30:57.982535 systemd-logind[1775]: Removed session 4. Apr 30 03:30:58.079848 systemd[1]: Started sshd@2-10.200.8.29:22-10.200.16.10:34034.service - OpenSSH per-connection server daemon (10.200.16.10:34034). Apr 30 03:30:58.702234 sshd[2292]: Accepted publickey for core from 10.200.16.10 port 34034 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:30:58.704156 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:58.709200 systemd-logind[1775]: New session 5 of user core. Apr 30 03:30:58.716837 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 03:30:59.143747 sshd[2292]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:59.147907 systemd[1]: sshd@2-10.200.8.29:22-10.200.16.10:34034.service: Deactivated successfully. Apr 30 03:30:59.152128 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:30:59.152696 systemd-logind[1775]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:30:59.153806 systemd-logind[1775]: Removed session 5. 
Apr 30 03:30:59.250826 systemd[1]: Started sshd@3-10.200.8.29:22-10.200.16.10:55248.service - OpenSSH per-connection server daemon (10.200.16.10:55248). Apr 30 03:30:59.872036 sshd[2300]: Accepted publickey for core from 10.200.16.10 port 55248 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:30:59.873662 sshd[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:59.878621 systemd-logind[1775]: New session 6 of user core. Apr 30 03:30:59.884762 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 03:31:00.321337 sshd[2300]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:00.326820 systemd[1]: sshd@3-10.200.8.29:22-10.200.16.10:55248.service: Deactivated successfully. Apr 30 03:31:00.330202 systemd-logind[1775]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:31:00.330744 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:31:00.332448 systemd-logind[1775]: Removed session 6. Apr 30 03:31:00.435085 systemd[1]: Started sshd@4-10.200.8.29:22-10.200.16.10:55260.service - OpenSSH per-connection server daemon (10.200.16.10:55260). Apr 30 03:31:01.054518 sshd[2308]: Accepted publickey for core from 10.200.16.10 port 55260 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:01.056261 sshd[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:01.061432 systemd-logind[1775]: New session 7 of user core. Apr 30 03:31:01.067843 systemd[1]: Started session-7.scope - Session 7 of User core. 
Apr 30 03:31:01.537715 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:31:01.538107 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:31:01.565136 sudo[2312]: pam_unix(sudo:session): session closed for user root Apr 30 03:31:01.666264 sshd[2308]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:01.671037 systemd[1]: sshd@4-10.200.8.29:22-10.200.16.10:55260.service: Deactivated successfully. Apr 30 03:31:01.675108 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:31:01.676076 systemd-logind[1775]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:31:01.677048 systemd-logind[1775]: Removed session 7. Apr 30 03:31:01.774081 systemd[1]: Started sshd@5-10.200.8.29:22-10.200.16.10:55276.service - OpenSSH per-connection server daemon (10.200.16.10:55276). Apr 30 03:31:02.393072 sshd[2317]: Accepted publickey for core from 10.200.16.10 port 55276 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:02.394730 sshd[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:02.398868 systemd-logind[1775]: New session 8 of user core. Apr 30 03:31:02.408948 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 30 03:31:02.738638 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:31:02.739110 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:31:02.742817 sudo[2322]: pam_unix(sudo:session): session closed for user root Apr 30 03:31:02.748368 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:31:02.748754 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:31:02.761784 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:31:02.764564 auditctl[2325]: No rules Apr 30 03:31:02.764959 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:31:02.765223 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:31:02.771123 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:31:02.803700 augenrules[2344]: No rules Apr 30 03:31:02.805389 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:31:02.808910 sudo[2321]: pam_unix(sudo:session): session closed for user root Apr 30 03:31:02.909720 sshd[2317]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:02.913023 systemd[1]: sshd@5-10.200.8.29:22-10.200.16.10:55276.service: Deactivated successfully. Apr 30 03:31:02.918197 systemd-logind[1775]: Session 8 logged out. Waiting for processes to exit. Apr 30 03:31:02.918673 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:31:02.919812 systemd-logind[1775]: Removed session 8. Apr 30 03:31:03.023105 systemd[1]: Started sshd@6-10.200.8.29:22-10.200.16.10:55282.service - OpenSSH per-connection server daemon (10.200.16.10:55282). 
Apr 30 03:31:03.641237 sshd[2353]: Accepted publickey for core from 10.200.16.10 port 55282 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:03.643075 sshd[2353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:03.648668 systemd-logind[1775]: New session 9 of user core. Apr 30 03:31:03.657796 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 03:31:03.985776 sudo[2357]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:31:03.986142 sudo[2357]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:31:05.423774 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 03:31:05.425757 (dockerd)[2372]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:31:06.913220 dockerd[2372]: time="2025-04-30T03:31:06.913140604Z" level=info msg="Starting up" Apr 30 03:31:06.977977 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 03:31:06.984045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:31:07.155682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:31:07.158808 (kubelet)[2393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:31:07.650814 kubelet[2393]: E0430 03:31:07.650754 2393 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:31:07.653535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:31:07.653866 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:31:08.012411 dockerd[2372]: time="2025-04-30T03:31:08.012360463Z" level=info msg="Loading containers: start." Apr 30 03:31:08.224515 kernel: Initializing XFRM netlink socket Apr 30 03:31:08.328677 systemd-networkd[1365]: docker0: Link UP Apr 30 03:31:08.358804 dockerd[2372]: time="2025-04-30T03:31:08.358758474Z" level=info msg="Loading containers: done." Apr 30 03:31:08.423551 dockerd[2372]: time="2025-04-30T03:31:08.423434648Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:31:08.423843 dockerd[2372]: time="2025-04-30T03:31:08.423634750Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:31:08.423843 dockerd[2372]: time="2025-04-30T03:31:08.423793852Z" level=info msg="Daemon has completed initialization" Apr 30 03:31:08.491617 dockerd[2372]: time="2025-04-30T03:31:08.491552058Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:31:08.491851 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 30 03:31:10.505701 containerd[1797]: time="2025-04-30T03:31:10.505640255Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 03:31:11.124813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1414247634.mount: Deactivated successfully.
Apr 30 03:31:13.018559 containerd[1797]: time="2025-04-30T03:31:13.018492174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:31:13.021830 containerd[1797]: time="2025-04-30T03:31:13.021759527Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674881"
Apr 30 03:31:13.027228 containerd[1797]: time="2025-04-30T03:31:13.027184216Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:31:13.031758 containerd[1797]: time="2025-04-30T03:31:13.031677889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:31:13.033161 containerd[1797]: time="2025-04-30T03:31:13.033049412Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.527359957s"
Apr 30 03:31:13.033161 containerd[1797]: time="2025-04-30T03:31:13.033095513Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
Apr 30 03:31:13.060307 containerd[1797]: time="2025-04-30T03:31:13.060265357Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 03:31:14.778621 containerd[1797]: time="2025-04-30T03:31:14.778561971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:31:14.785820 containerd[1797]: time="2025-04-30T03:31:14.785746489Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617542"
Apr 30 03:31:14.789598 containerd[1797]: time="2025-04-30T03:31:14.789544751Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:31:14.794977 containerd[1797]: time="2025-04-30T03:31:14.794917139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:31:14.795934 containerd[1797]: time="2025-04-30T03:31:14.795896655Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.735589497s"
Apr 30 03:31:14.796024 containerd[1797]: time="2025-04-30T03:31:14.795940355Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
Apr 30 03:31:14.820628 containerd[1797]: time="2025-04-30T03:31:14.820476057Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 03:31:15.653615 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Apr 30 03:31:16.030689 containerd[1797]: time="2025-04-30T03:31:16.030627257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:31:16.033415 containerd[1797]: time="2025-04-30T03:31:16.033346801Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903690"
Apr 30 03:31:16.039197 containerd[1797]: time="2025-04-30T03:31:16.039146396Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:31:16.046231 containerd[1797]: time="2025-04-30T03:31:16.046182311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:31:16.047332 containerd[1797]: time="2025-04-30T03:31:16.047181928Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.226652169s"
Apr 30 03:31:16.047332 containerd[1797]: time="2025-04-30T03:31:16.047222728Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
Apr 30 03:31:16.070182 containerd[1797]: time="2025-04-30T03:31:16.070138703Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 03:31:17.284694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2152358075.mount: Deactivated successfully.
Apr 30 03:31:17.720591 update_engine[1780]: I20250430 03:31:17.719504 1780 update_attempter.cc:509] Updating boot flags...
Apr 30 03:31:17.728075 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 30 03:31:17.738757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:31:17.806756 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2633)
Apr 30 03:31:17.957688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:31:17.961353 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:31:18.431857 kubelet[2659]: E0430 03:31:18.431772 2659 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:31:18.433540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:31:18.433752 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:31:18.471612 containerd[1797]: time="2025-04-30T03:31:18.471533447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:31:18.473795 containerd[1797]: time="2025-04-30T03:31:18.473693371Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185825" Apr 30 03:31:18.477246 containerd[1797]: time="2025-04-30T03:31:18.477181510Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:31:18.483151 containerd[1797]: time="2025-04-30T03:31:18.483083675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:31:18.483867 containerd[1797]: time="2025-04-30T03:31:18.483689882Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.413509678s" Apr 30 03:31:18.483867 containerd[1797]: time="2025-04-30T03:31:18.483731283Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" Apr 30 03:31:18.507768 containerd[1797]: time="2025-04-30T03:31:18.507321544Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 03:31:19.156174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2242251741.mount: Deactivated successfully. 
Apr 30 03:31:20.497028 containerd[1797]: time="2025-04-30T03:31:20.496966803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:31:20.503387 containerd[1797]: time="2025-04-30T03:31:20.503315173Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Apr 30 03:31:20.507046 containerd[1797]: time="2025-04-30T03:31:20.506987914Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:31:20.512337 containerd[1797]: time="2025-04-30T03:31:20.512272073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:31:20.513454 containerd[1797]: time="2025-04-30T03:31:20.513297084Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.00592674s" Apr 30 03:31:20.513454 containerd[1797]: time="2025-04-30T03:31:20.513334985Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 03:31:20.536523 containerd[1797]: time="2025-04-30T03:31:20.536487241Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 03:31:21.094343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075267780.mount: Deactivated successfully. 
Apr 30 03:31:21.117706 containerd[1797]: time="2025-04-30T03:31:21.117646243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:31:21.120024 containerd[1797]: time="2025-04-30T03:31:21.119948968Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Apr 30 03:31:21.124174 containerd[1797]: time="2025-04-30T03:31:21.124110715Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:31:21.129380 containerd[1797]: time="2025-04-30T03:31:21.129320673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:31:21.130086 containerd[1797]: time="2025-04-30T03:31:21.130048482Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 593.520039ms" Apr 30 03:31:21.130189 containerd[1797]: time="2025-04-30T03:31:21.130093682Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 30 03:31:21.154402 containerd[1797]: time="2025-04-30T03:31:21.154183352Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 03:31:21.853269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2947168486.mount: Deactivated successfully. 
Apr 30 03:31:24.290984 containerd[1797]: time="2025-04-30T03:31:24.290918673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:31:24.293058 containerd[1797]: time="2025-04-30T03:31:24.292996996Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Apr 30 03:31:24.296327 containerd[1797]: time="2025-04-30T03:31:24.296273033Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:31:24.301297 containerd[1797]: time="2025-04-30T03:31:24.301238189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:31:24.302341 containerd[1797]: time="2025-04-30T03:31:24.302301200Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.148073448s" Apr 30 03:31:24.302449 containerd[1797]: time="2025-04-30T03:31:24.302347201Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Apr 30 03:31:27.694674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:31:27.700788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:31:27.732640 systemd[1]: Reloading requested from client PID 2849 ('systemctl') (unit session-9.scope)... Apr 30 03:31:27.732669 systemd[1]: Reloading... 
Apr 30 03:31:27.863498 zram_generator::config[2890]: No configuration found. Apr 30 03:31:27.991165 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:31:28.071001 systemd[1]: Reloading finished in 337 ms. Apr 30 03:31:28.126002 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:31:28.129580 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:31:28.129943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:31:28.136045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:31:28.441716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:31:28.442014 (kubelet)[2974]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:31:29.122653 kubelet[2974]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:31:29.122653 kubelet[2974]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:31:29.122653 kubelet[2974]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 03:31:29.123195 kubelet[2974]: I0430 03:31:29.122713 2974 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:31:29.292734 kubelet[2974]: I0430 03:31:29.292689 2974 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:31:29.292734 kubelet[2974]: I0430 03:31:29.292720 2974 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:31:29.293003 kubelet[2974]: I0430 03:31:29.292983 2974 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:31:29.310511 kubelet[2974]: I0430 03:31:29.310241 2974 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:31:29.310694 kubelet[2974]: E0430 03:31:29.310633 2974 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:29.320868 kubelet[2974]: I0430 03:31:29.320607 2974 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:31:29.322153 kubelet[2974]: I0430 03:31:29.322110 2974 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:31:29.322361 kubelet[2974]: I0430 03:31:29.322153 2974 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-6f0285bad0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:31:29.322770 kubelet[2974]: I0430 03:31:29.322747 2974 topology_manager.go:138] "Creating topology manager with none policy" 
Apr 30 03:31:29.322827 kubelet[2974]: I0430 03:31:29.322773 2974 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:31:29.322940 kubelet[2974]: I0430 03:31:29.322921 2974 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:31:29.323721 kubelet[2974]: I0430 03:31:29.323703 2974 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:31:29.323802 kubelet[2974]: I0430 03:31:29.323726 2974 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:31:29.323802 kubelet[2974]: I0430 03:31:29.323756 2974 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:31:29.323802 kubelet[2974]: I0430 03:31:29.323774 2974 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:31:29.330201 kubelet[2974]: W0430 03:31:29.330127 2974 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:29.330201 kubelet[2974]: E0430 03:31:29.330209 2974 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:29.330359 kubelet[2974]: W0430 03:31:29.330291 2974 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-6f0285bad0&limit=500&resourceVersion=0": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:29.330359 kubelet[2974]: E0430 03:31:29.330339 2974 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-6f0285bad0&limit=500&resourceVersion=0": 
dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:29.332502 kubelet[2974]: I0430 03:31:29.332478 2974 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:31:29.335487 kubelet[2974]: I0430 03:31:29.334435 2974 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:31:29.335487 kubelet[2974]: W0430 03:31:29.334519 2974 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 03:31:29.336630 kubelet[2974]: I0430 03:31:29.336614 2974 server.go:1264] "Started kubelet" Apr 30 03:31:29.342299 kubelet[2974]: I0430 03:31:29.342246 2974 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:31:29.343441 kubelet[2974]: I0430 03:31:29.343415 2974 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:31:29.344735 kubelet[2974]: I0430 03:31:29.344678 2974 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:31:29.345020 kubelet[2974]: I0430 03:31:29.344997 2974 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:31:29.346671 kubelet[2974]: I0430 03:31:29.346650 2974 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:31:29.346806 kubelet[2974]: E0430 03:31:29.345693 2974 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.29:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-a-6f0285bad0.183afb18c3764866 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-6f0285bad0,UID:ci-4081.3.3-a-6f0285bad0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-6f0285bad0,},FirstTimestamp:2025-04-30 03:31:29.336584294 +0000 UTC m=+0.889435086,LastTimestamp:2025-04-30 03:31:29.336584294 +0000 UTC m=+0.889435086,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-6f0285bad0,}" Apr 30 03:31:29.353099 kubelet[2974]: E0430 03:31:29.352940 2974 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-6f0285bad0\" not found" Apr 30 03:31:29.353298 kubelet[2974]: I0430 03:31:29.353276 2974 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:31:29.353547 kubelet[2974]: I0430 03:31:29.353531 2974 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:31:29.353886 kubelet[2974]: I0430 03:31:29.353675 2974 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:31:29.354083 kubelet[2974]: W0430 03:31:29.354035 2974 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:29.354141 kubelet[2974]: E0430 03:31:29.354091 2974 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:29.354885 kubelet[2974]: E0430 03:31:29.354838 2974 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:31:29.355398 kubelet[2974]: E0430 03:31:29.355048 2974 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-6f0285bad0?timeout=10s\": dial tcp 10.200.8.29:6443: connect: connection refused" interval="200ms" Apr 30 03:31:29.356507 kubelet[2974]: I0430 03:31:29.355745 2974 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:31:29.358279 kubelet[2974]: I0430 03:31:29.357066 2974 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:31:29.358279 kubelet[2974]: I0430 03:31:29.357091 2974 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:31:29.376943 kubelet[2974]: I0430 03:31:29.376805 2974 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:31:29.380712 kubelet[2974]: I0430 03:31:29.380681 2974 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:31:29.380873 kubelet[2974]: I0430 03:31:29.380861 2974 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:31:29.381125 kubelet[2974]: I0430 03:31:29.381109 2974 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:31:29.382793 kubelet[2974]: E0430 03:31:29.382750 2974 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:31:29.383262 kubelet[2974]: W0430 03:31:29.383210 2974 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:29.383372 kubelet[2974]: E0430 03:31:29.383360 2974 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:29.483033 kubelet[2974]: E0430 03:31:29.482963 2974 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 03:31:29.555908 kubelet[2974]: E0430 03:31:29.555844 2974 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-6f0285bad0?timeout=10s\": dial tcp 10.200.8.29:6443: connect: connection refused" interval="400ms" Apr 30 03:31:29.618222 kubelet[2974]: I0430 03:31:29.618156 2974 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.618993 kubelet[2974]: E0430 03:31:29.618890 2974 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://10.200.8.29:6443/api/v1/nodes\": dial tcp 10.200.8.29:6443: connect: connection refused" node="ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.619360 kubelet[2974]: I0430 03:31:29.619321 2974 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:31:29.619520 kubelet[2974]: I0430 03:31:29.619343 2974 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:31:29.619520 kubelet[2974]: I0430 03:31:29.619505 2974 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:31:29.631127 kubelet[2974]: I0430 03:31:29.631009 2974 policy_none.go:49] "None policy: Start" Apr 30 03:31:29.632334 kubelet[2974]: I0430 03:31:29.632312 2974 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:31:29.632433 kubelet[2974]: I0430 03:31:29.632342 2974 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:31:29.645890 kubelet[2974]: I0430 03:31:29.644458 2974 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:31:29.645890 kubelet[2974]: I0430 03:31:29.644722 2974 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:31:29.645890 kubelet[2974]: I0430 03:31:29.644861 2974 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:31:29.649452 kubelet[2974]: E0430 03:31:29.649425 2974 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-a-6f0285bad0\" not found" Apr 30 03:31:29.684088 kubelet[2974]: I0430 03:31:29.684007 2974 topology_manager.go:215] "Topology Admit Handler" podUID="dad0dc0b36395a7de6a3be505bfb3ebe" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.686491 kubelet[2974]: I0430 03:31:29.686437 2974 topology_manager.go:215] "Topology Admit Handler" podUID="e196b795abbd5d192c383ed7d70d8598" podNamespace="kube-system" 
podName="kube-controller-manager-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.688829 kubelet[2974]: I0430 03:31:29.688630 2974 topology_manager.go:215] "Topology Admit Handler" podUID="24e498cc09c5fa71ac8f09837e836ac5" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.756004 kubelet[2974]: I0430 03:31:29.755883 2974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e196b795abbd5d192c383ed7d70d8598-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-6f0285bad0\" (UID: \"e196b795abbd5d192c383ed7d70d8598\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.756004 kubelet[2974]: I0430 03:31:29.755945 2974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24e498cc09c5fa71ac8f09837e836ac5-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-6f0285bad0\" (UID: \"24e498cc09c5fa71ac8f09837e836ac5\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.756004 kubelet[2974]: I0430 03:31:29.755986 2974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dad0dc0b36395a7de6a3be505bfb3ebe-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-6f0285bad0\" (UID: \"dad0dc0b36395a7de6a3be505bfb3ebe\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.756004 kubelet[2974]: I0430 03:31:29.756015 2974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dad0dc0b36395a7de6a3be505bfb3ebe-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-6f0285bad0\" (UID: \"dad0dc0b36395a7de6a3be505bfb3ebe\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.756345 kubelet[2974]: I0430 
03:31:29.756041 2974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dad0dc0b36395a7de6a3be505bfb3ebe-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-6f0285bad0\" (UID: \"dad0dc0b36395a7de6a3be505bfb3ebe\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.756345 kubelet[2974]: I0430 03:31:29.756072 2974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e196b795abbd5d192c383ed7d70d8598-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-6f0285bad0\" (UID: \"e196b795abbd5d192c383ed7d70d8598\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.756345 kubelet[2974]: I0430 03:31:29.756097 2974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e196b795abbd5d192c383ed7d70d8598-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-6f0285bad0\" (UID: \"e196b795abbd5d192c383ed7d70d8598\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.756345 kubelet[2974]: I0430 03:31:29.756119 2974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e196b795abbd5d192c383ed7d70d8598-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-6f0285bad0\" (UID: \"e196b795abbd5d192c383ed7d70d8598\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.756345 kubelet[2974]: I0430 03:31:29.756147 2974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e196b795abbd5d192c383ed7d70d8598-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4081.3.3-a-6f0285bad0\" (UID: \"e196b795abbd5d192c383ed7d70d8598\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.821229 kubelet[2974]: I0430 03:31:29.821194 2974 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.821620 kubelet[2974]: E0430 03:31:29.821589 2974 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.29:6443/api/v1/nodes\": dial tcp 10.200.8.29:6443: connect: connection refused" node="ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:29.957219 kubelet[2974]: E0430 03:31:29.957048 2974 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-6f0285bad0?timeout=10s\": dial tcp 10.200.8.29:6443: connect: connection refused" interval="800ms" Apr 30 03:31:29.994555 containerd[1797]: time="2025-04-30T03:31:29.994487811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-6f0285bad0,Uid:dad0dc0b36395a7de6a3be505bfb3ebe,Namespace:kube-system,Attempt:0,}" Apr 30 03:31:29.996158 containerd[1797]: time="2025-04-30T03:31:29.996116233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-6f0285bad0,Uid:e196b795abbd5d192c383ed7d70d8598,Namespace:kube-system,Attempt:0,}" Apr 30 03:31:30.001957 containerd[1797]: time="2025-04-30T03:31:30.001919311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-6f0285bad0,Uid:24e498cc09c5fa71ac8f09837e836ac5,Namespace:kube-system,Attempt:0,}" Apr 30 03:31:30.224890 kubelet[2974]: I0430 03:31:30.224743 2974 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:30.225672 kubelet[2974]: E0430 03:31:30.225626 2974 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://10.200.8.29:6443/api/v1/nodes\": dial tcp 10.200.8.29:6443: connect: connection refused" node="ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:30.320934 kubelet[2974]: W0430 03:31:30.320847 2974 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:30.320934 kubelet[2974]: E0430 03:31:30.320935 2974 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:30.339870 kubelet[2974]: W0430 03:31:30.339788 2974 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:30.339870 kubelet[2974]: E0430 03:31:30.339874 2974 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:30.577950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702374838.mount: Deactivated successfully. 
Apr 30 03:31:30.646970 containerd[1797]: time="2025-04-30T03:31:30.646903754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:31:30.649933 containerd[1797]: time="2025-04-30T03:31:30.649828493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 30 03:31:30.654151 containerd[1797]: time="2025-04-30T03:31:30.654065550Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:31:30.657020 containerd[1797]: time="2025-04-30T03:31:30.656982089Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:31:30.660793 containerd[1797]: time="2025-04-30T03:31:30.660738440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:31:30.666949 containerd[1797]: time="2025-04-30T03:31:30.666885722Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:31:30.668690 containerd[1797]: time="2025-04-30T03:31:30.668324941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:31:30.673347 containerd[1797]: time="2025-04-30T03:31:30.673315208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:31:30.674092 
containerd[1797]: time="2025-04-30T03:31:30.674056318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 677.843284ms" Apr 30 03:31:30.675561 containerd[1797]: time="2025-04-30T03:31:30.675528138Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 680.933826ms" Apr 30 03:31:30.677991 containerd[1797]: time="2025-04-30T03:31:30.677957170Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 675.963359ms" Apr 30 03:31:30.748015 kubelet[2974]: W0430 03:31:30.747946 2974 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-6f0285bad0&limit=500&resourceVersion=0": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:30.748015 kubelet[2974]: E0430 03:31:30.748020 2974 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-6f0285bad0&limit=500&resourceVersion=0": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:30.757619 kubelet[2974]: E0430 03:31:30.757559 2974 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-6f0285bad0?timeout=10s\": dial tcp 10.200.8.29:6443: connect: connection refused" interval="1.6s" Apr 30 03:31:30.865164 kubelet[2974]: W0430 03:31:30.864995 2974 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:30.865164 kubelet[2974]: E0430 03:31:30.865071 2974 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:31.028543 kubelet[2974]: I0430 03:31:31.028509 2974 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:31.028921 kubelet[2974]: E0430 03:31:31.028888 2974 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.29:6443/api/v1/nodes\": dial tcp 10.200.8.29:6443: connect: connection refused" node="ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:31.346755 kubelet[2974]: E0430 03:31:31.346710 2974 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.29:6443: connect: connection refused Apr 30 03:31:31.353607 containerd[1797]: time="2025-04-30T03:31:31.353254920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:31:31.353607 containerd[1797]: time="2025-04-30T03:31:31.353383022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:31:31.353607 containerd[1797]: time="2025-04-30T03:31:31.353412722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:31.354258 containerd[1797]: time="2025-04-30T03:31:31.353584325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:31.354258 containerd[1797]: time="2025-04-30T03:31:31.354178833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:31:31.354258 containerd[1797]: time="2025-04-30T03:31:31.354230133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:31:31.354783 containerd[1797]: time="2025-04-30T03:31:31.354269734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:31.354783 containerd[1797]: time="2025-04-30T03:31:31.354412436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:31.356513 containerd[1797]: time="2025-04-30T03:31:31.356117959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:31:31.356513 containerd[1797]: time="2025-04-30T03:31:31.356174559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:31:31.356513 containerd[1797]: time="2025-04-30T03:31:31.356215560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:31.356513 containerd[1797]: time="2025-04-30T03:31:31.356347862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:31.443002 containerd[1797]: time="2025-04-30T03:31:31.442896522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-6f0285bad0,Uid:24e498cc09c5fa71ac8f09837e836ac5,Namespace:kube-system,Attempt:0,} returns sandbox id \"2731716749c62210bce15d1bd8ae4d37ca27cb81d7d0088add6c39cb080a2b7b\"" Apr 30 03:31:31.449960 containerd[1797]: time="2025-04-30T03:31:31.449802714Z" level=info msg="CreateContainer within sandbox \"2731716749c62210bce15d1bd8ae4d37ca27cb81d7d0088add6c39cb080a2b7b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:31:31.478490 containerd[1797]: time="2025-04-30T03:31:31.477905791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-6f0285bad0,Uid:e196b795abbd5d192c383ed7d70d8598,Namespace:kube-system,Attempt:0,} returns sandbox id \"38ea0bd50830bd7cef220d7372e835a9d1bdc6578a3ef7b5f90751cf4467de3c\"" Apr 30 03:31:31.481802 containerd[1797]: time="2025-04-30T03:31:31.481224335Z" level=info msg="CreateContainer within sandbox \"38ea0bd50830bd7cef220d7372e835a9d1bdc6578a3ef7b5f90751cf4467de3c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:31:31.484560 containerd[1797]: time="2025-04-30T03:31:31.484526980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-6f0285bad0,Uid:dad0dc0b36395a7de6a3be505bfb3ebe,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"9eae751858ed5101e174c1ed20b97d900b14210e5132d59ad739f3a1098bd095\"" Apr 30 03:31:31.488663 containerd[1797]: time="2025-04-30T03:31:31.488544333Z" level=info msg="CreateContainer within sandbox \"9eae751858ed5101e174c1ed20b97d900b14210e5132d59ad739f3a1098bd095\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:31:31.525343 containerd[1797]: time="2025-04-30T03:31:31.525163524Z" level=info msg="CreateContainer within sandbox \"2731716749c62210bce15d1bd8ae4d37ca27cb81d7d0088add6c39cb080a2b7b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"af8bc1298ce7fa881e4ee0649f0882c6990cf13d4a3507319e162626e6650fab\"" Apr 30 03:31:31.526070 containerd[1797]: time="2025-04-30T03:31:31.526030636Z" level=info msg="StartContainer for \"af8bc1298ce7fa881e4ee0649f0882c6990cf13d4a3507319e162626e6650fab\"" Apr 30 03:31:31.579607 containerd[1797]: time="2025-04-30T03:31:31.579559753Z" level=info msg="CreateContainer within sandbox \"38ea0bd50830bd7cef220d7372e835a9d1bdc6578a3ef7b5f90751cf4467de3c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"df5d32f82e17f6608a69cf8b6fc833007ea095357bfed443f3d437ef5f9bbf9b\"" Apr 30 03:31:31.580262 containerd[1797]: time="2025-04-30T03:31:31.580209862Z" level=info msg="StartContainer for \"df5d32f82e17f6608a69cf8b6fc833007ea095357bfed443f3d437ef5f9bbf9b\"" Apr 30 03:31:31.585560 containerd[1797]: time="2025-04-30T03:31:31.585520633Z" level=info msg="CreateContainer within sandbox \"9eae751858ed5101e174c1ed20b97d900b14210e5132d59ad739f3a1098bd095\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bf176c99c7c5adccc8853268639105160c2968ec147474113ec595f38ba93320\"" Apr 30 03:31:31.586387 containerd[1797]: time="2025-04-30T03:31:31.586353744Z" level=info msg="StartContainer for \"bf176c99c7c5adccc8853268639105160c2968ec147474113ec595f38ba93320\"" Apr 30 03:31:31.639538 kubelet[2974]: E0430 03:31:31.638213 2974 event.go:368] 
"Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.29:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-a-6f0285bad0.183afb18c3764866 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-6f0285bad0,UID:ci-4081.3.3-a-6f0285bad0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-6f0285bad0,},FirstTimestamp:2025-04-30 03:31:29.336584294 +0000 UTC m=+0.889435086,LastTimestamp:2025-04-30 03:31:29.336584294 +0000 UTC m=+0.889435086,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-6f0285bad0,}" Apr 30 03:31:31.644865 containerd[1797]: time="2025-04-30T03:31:31.642806701Z" level=info msg="StartContainer for \"af8bc1298ce7fa881e4ee0649f0882c6990cf13d4a3507319e162626e6650fab\" returns successfully" Apr 30 03:31:31.659766 systemd[1]: run-containerd-runc-k8s.io-bf176c99c7c5adccc8853268639105160c2968ec147474113ec595f38ba93320-runc.dG6x7G.mount: Deactivated successfully. 
Apr 30 03:31:31.768250 containerd[1797]: time="2025-04-30T03:31:31.768193881Z" level=info msg="StartContainer for \"bf176c99c7c5adccc8853268639105160c2968ec147474113ec595f38ba93320\" returns successfully" Apr 30 03:31:31.790492 containerd[1797]: time="2025-04-30T03:31:31.789898272Z" level=info msg="StartContainer for \"df5d32f82e17f6608a69cf8b6fc833007ea095357bfed443f3d437ef5f9bbf9b\" returns successfully" Apr 30 03:31:32.633482 kubelet[2974]: I0430 03:31:32.633355 2974 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:34.144529 kubelet[2974]: E0430 03:31:34.144449 2974 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-a-6f0285bad0\" not found" node="ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:34.278078 kubelet[2974]: I0430 03:31:34.277867 2974 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:34.331224 kubelet[2974]: I0430 03:31:34.331173 2974 apiserver.go:52] "Watching apiserver" Apr 30 03:31:34.353882 kubelet[2974]: I0430 03:31:34.353814 2974 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:31:34.417883 kubelet[2974]: E0430 03:31:34.417732 2974 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-a-6f0285bad0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:36.506358 systemd[1]: Reloading requested from client PID 3242 ('systemctl') (unit session-9.scope)... Apr 30 03:31:36.506373 systemd[1]: Reloading... Apr 30 03:31:36.583547 zram_generator::config[3281]: No configuration found. Apr 30 03:31:36.738228 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 30 03:31:36.828954 systemd[1]: Reloading finished in 322 ms. Apr 30 03:31:36.866022 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:31:36.866810 kubelet[2974]: E0430 03:31:36.865911 2974 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081.3.3-a-6f0285bad0.183afb18c3764866 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-6f0285bad0,UID:ci-4081.3.3-a-6f0285bad0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-6f0285bad0,},FirstTimestamp:2025-04-30 03:31:29.336584294 +0000 UTC m=+0.889435086,LastTimestamp:2025-04-30 03:31:29.336584294 +0000 UTC m=+0.889435086,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-6f0285bad0,}" Apr 30 03:31:36.882044 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:31:36.882520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:31:36.890842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:31:36.996679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:31:37.008952 (kubelet)[3359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:31:37.579744 kubelet[3359]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:31:37.579744 kubelet[3359]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Apr 30 03:31:37.579744 kubelet[3359]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:31:37.580323 kubelet[3359]: I0430 03:31:37.579848 3359 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:31:37.589879 kubelet[3359]: I0430 03:31:37.589838 3359 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:31:37.589879 kubelet[3359]: I0430 03:31:37.589874 3359 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:31:37.590320 kubelet[3359]: I0430 03:31:37.590124 3359 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:31:37.592727 kubelet[3359]: I0430 03:31:37.592298 3359 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 03:31:37.595094 kubelet[3359]: I0430 03:31:37.594294 3359 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:31:37.603648 kubelet[3359]: I0430 03:31:37.602742 3359 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:31:37.603648 kubelet[3359]: I0430 03:31:37.603306 3359 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:31:37.604002 kubelet[3359]: I0430 03:31:37.603346 3359 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-6f0285bad0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:31:37.604202 kubelet[3359]: I0430 03:31:37.604188 3359 topology_manager.go:138] "Creating topology manager with none policy" 
Apr 30 03:31:37.604279 kubelet[3359]: I0430 03:31:37.604271 3359 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:31:37.604386 kubelet[3359]: I0430 03:31:37.604377 3359 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:31:37.604583 kubelet[3359]: I0430 03:31:37.604570 3359 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:31:37.605094 kubelet[3359]: I0430 03:31:37.605077 3359 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:31:37.605283 kubelet[3359]: I0430 03:31:37.605271 3359 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:31:37.605371 kubelet[3359]: I0430 03:31:37.605360 3359 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:31:37.609923 kubelet[3359]: I0430 03:31:37.608692 3359 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:31:37.609923 kubelet[3359]: I0430 03:31:37.608901 3359 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:31:37.609923 kubelet[3359]: I0430 03:31:37.609301 3359 server.go:1264] "Started kubelet" Apr 30 03:31:37.617741 kubelet[3359]: I0430 03:31:37.617721 3359 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:31:37.626000 kubelet[3359]: I0430 03:31:37.625967 3359 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:31:37.627448 kubelet[3359]: I0430 03:31:37.627427 3359 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:31:37.637319 kubelet[3359]: I0430 03:31:37.637246 3359 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:31:37.637774 kubelet[3359]: I0430 03:31:37.637753 3359 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:31:37.642531 kubelet[3359]: I0430 03:31:37.642511 3359 
volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:31:37.650956 kubelet[3359]: I0430 03:31:37.650773 3359 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:31:37.652106 kubelet[3359]: I0430 03:31:37.652090 3359 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:31:37.654767 kubelet[3359]: I0430 03:31:37.654729 3359 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:31:37.656760 kubelet[3359]: I0430 03:31:37.656739 3359 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:31:37.657233 kubelet[3359]: I0430 03:31:37.656858 3359 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:31:37.657233 kubelet[3359]: I0430 03:31:37.656887 3359 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:31:37.657233 kubelet[3359]: E0430 03:31:37.656936 3359 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:31:37.663870 kubelet[3359]: I0430 03:31:37.663834 3359 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:31:37.674482 kubelet[3359]: I0430 03:31:37.671959 3359 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:31:37.674482 kubelet[3359]: I0430 03:31:37.672001 3359 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:31:37.685636 kubelet[3359]: E0430 03:31:37.684597 3359 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:31:37.741392 kubelet[3359]: I0430 03:31:37.740620 3359 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:31:37.741392 kubelet[3359]: I0430 03:31:37.740640 3359 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:31:37.741392 kubelet[3359]: I0430 03:31:37.740666 3359 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:31:37.741392 kubelet[3359]: I0430 03:31:37.740855 3359 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:31:37.741392 kubelet[3359]: I0430 03:31:37.740869 3359 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:31:37.741392 kubelet[3359]: I0430 03:31:37.740892 3359 policy_none.go:49] "None policy: Start" Apr 30 03:31:37.743025 kubelet[3359]: I0430 03:31:37.741794 3359 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:31:37.743025 kubelet[3359]: I0430 03:31:37.741820 3359 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:31:37.743025 kubelet[3359]: I0430 03:31:37.742049 3359 state_mem.go:75] "Updated machine memory state" Apr 30 03:31:37.744081 kubelet[3359]: I0430 03:31:37.744053 3359 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:31:37.746981 kubelet[3359]: I0430 03:31:37.744299 3359 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:31:37.746981 kubelet[3359]: I0430 03:31:37.744429 3359 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:31:37.750909 kubelet[3359]: I0430 03:31:37.750882 3359 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:37.757178 kubelet[3359]: I0430 03:31:37.757148 3359 topology_manager.go:215] "Topology Admit Handler" podUID="24e498cc09c5fa71ac8f09837e836ac5" podNamespace="kube-system" 
podName="kube-scheduler-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:37.757390 kubelet[3359]: I0430 03:31:37.757370 3359 topology_manager.go:215] "Topology Admit Handler" podUID="dad0dc0b36395a7de6a3be505bfb3ebe" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:37.757588 kubelet[3359]: I0430 03:31:37.757571 3359 topology_manager.go:215] "Topology Admit Handler" podUID="e196b795abbd5d192c383ed7d70d8598" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:37.768334 kubelet[3359]: W0430 03:31:37.767238 3359 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:31:37.770434 kubelet[3359]: W0430 03:31:37.770383 3359 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:31:37.774166 kubelet[3359]: W0430 03:31:37.771916 3359 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:31:37.774166 kubelet[3359]: I0430 03:31:37.772022 3359 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:37.774166 kubelet[3359]: I0430 03:31:37.772094 3359 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:37.854526 kubelet[3359]: I0430 03:31:37.852876 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e196b795abbd5d192c383ed7d70d8598-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-6f0285bad0\" (UID: \"e196b795abbd5d192c383ed7d70d8598\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:37.854526 kubelet[3359]: I0430 
03:31:37.852925 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24e498cc09c5fa71ac8f09837e836ac5-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-6f0285bad0\" (UID: \"24e498cc09c5fa71ac8f09837e836ac5\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:37.854526 kubelet[3359]: I0430 03:31:37.852943 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dad0dc0b36395a7de6a3be505bfb3ebe-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-6f0285bad0\" (UID: \"dad0dc0b36395a7de6a3be505bfb3ebe\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:37.854526 kubelet[3359]: I0430 03:31:37.852966 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dad0dc0b36395a7de6a3be505bfb3ebe-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-6f0285bad0\" (UID: \"dad0dc0b36395a7de6a3be505bfb3ebe\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:37.854526 kubelet[3359]: I0430 03:31:37.852988 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dad0dc0b36395a7de6a3be505bfb3ebe-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-6f0285bad0\" (UID: \"dad0dc0b36395a7de6a3be505bfb3ebe\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:37.854905 kubelet[3359]: I0430 03:31:37.853020 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e196b795abbd5d192c383ed7d70d8598-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-6f0285bad0\" (UID: \"e196b795abbd5d192c383ed7d70d8598\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:37.854905 kubelet[3359]: I0430 03:31:37.853043 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e196b795abbd5d192c383ed7d70d8598-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-6f0285bad0\" (UID: \"e196b795abbd5d192c383ed7d70d8598\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:37.854905 kubelet[3359]: I0430 03:31:37.853061 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e196b795abbd5d192c383ed7d70d8598-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-6f0285bad0\" (UID: \"e196b795abbd5d192c383ed7d70d8598\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:37.854905 kubelet[3359]: I0430 03:31:37.853085 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e196b795abbd5d192c383ed7d70d8598-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-6f0285bad0\" (UID: \"e196b795abbd5d192c383ed7d70d8598\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:38.618573 kubelet[3359]: I0430 03:31:38.618528 3359 apiserver.go:52] "Watching apiserver" Apr 30 03:31:38.651844 kubelet[3359]: I0430 03:31:38.651787 3359 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:31:38.698065 kubelet[3359]: I0430 03:31:38.697395 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-a-6f0285bad0" podStartSLOduration=1.697371529 podStartE2EDuration="1.697371529s" podCreationTimestamp="2025-04-30 03:31:37 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:31:38.678136209 +0000 UTC m=+1.665394863" watchObservedRunningTime="2025-04-30 03:31:38.697371529 +0000 UTC m=+1.684630183" Apr 30 03:31:38.725840 kubelet[3359]: W0430 03:31:38.725413 3359 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:31:38.727121 kubelet[3359]: W0430 03:31:38.726790 3359 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:31:38.727121 kubelet[3359]: E0430 03:31:38.726879 3359 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-a-6f0285bad0\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:38.728497 kubelet[3359]: E0430 03:31:38.727827 3359 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.3-a-6f0285bad0\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.3-a-6f0285bad0" Apr 30 03:31:38.728497 kubelet[3359]: I0430 03:31:38.728157 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-6f0285bad0" podStartSLOduration=1.728141581 podStartE2EDuration="1.728141581s" podCreationTimestamp="2025-04-30 03:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:31:38.698885546 +0000 UTC m=+1.686144300" watchObservedRunningTime="2025-04-30 03:31:38.728141581 +0000 UTC m=+1.715400335" Apr 30 03:31:38.745420 kubelet[3359]: I0430 03:31:38.745358 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-a-6f0285bad0" 
podStartSLOduration=1.7453376779999998 podStartE2EDuration="1.745337678s" podCreationTimestamp="2025-04-30 03:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:31:38.727659176 +0000 UTC m=+1.714917930" watchObservedRunningTime="2025-04-30 03:31:38.745337678 +0000 UTC m=+1.732596432" Apr 30 03:31:42.870802 sudo[2357]: pam_unix(sudo:session): session closed for user root Apr 30 03:31:42.972215 sshd[2353]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:42.975854 systemd[1]: sshd@6-10.200.8.29:22-10.200.16.10:55282.service: Deactivated successfully. Apr 30 03:31:42.982257 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:31:42.983202 systemd-logind[1775]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:31:42.984278 systemd-logind[1775]: Removed session 9. Apr 30 03:31:49.766978 kubelet[3359]: I0430 03:31:49.766914 3359 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:31:49.768221 kubelet[3359]: I0430 03:31:49.767736 3359 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:31:49.768319 containerd[1797]: time="2025-04-30T03:31:49.767439142Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 03:31:50.589073 kubelet[3359]: I0430 03:31:50.588998 3359 topology_manager.go:215] "Topology Admit Handler" podUID="70a36ffc-f930-41b6-bd21-d3952b2606f9" podNamespace="kube-system" podName="kube-proxy-qs5zt" Apr 30 03:31:50.627283 kubelet[3359]: I0430 03:31:50.627187 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mglzd\" (UniqueName: \"kubernetes.io/projected/70a36ffc-f930-41b6-bd21-d3952b2606f9-kube-api-access-mglzd\") pod \"kube-proxy-qs5zt\" (UID: \"70a36ffc-f930-41b6-bd21-d3952b2606f9\") " pod="kube-system/kube-proxy-qs5zt" Apr 30 03:31:50.627283 kubelet[3359]: I0430 03:31:50.627239 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70a36ffc-f930-41b6-bd21-d3952b2606f9-kube-proxy\") pod \"kube-proxy-qs5zt\" (UID: \"70a36ffc-f930-41b6-bd21-d3952b2606f9\") " pod="kube-system/kube-proxy-qs5zt" Apr 30 03:31:50.627283 kubelet[3359]: I0430 03:31:50.627265 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70a36ffc-f930-41b6-bd21-d3952b2606f9-lib-modules\") pod \"kube-proxy-qs5zt\" (UID: \"70a36ffc-f930-41b6-bd21-d3952b2606f9\") " pod="kube-system/kube-proxy-qs5zt" Apr 30 03:31:50.627587 kubelet[3359]: I0430 03:31:50.627315 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70a36ffc-f930-41b6-bd21-d3952b2606f9-xtables-lock\") pod \"kube-proxy-qs5zt\" (UID: \"70a36ffc-f930-41b6-bd21-d3952b2606f9\") " pod="kube-system/kube-proxy-qs5zt" Apr 30 03:31:50.864529 kubelet[3359]: I0430 03:31:50.863063 3359 topology_manager.go:215] "Topology Admit Handler" podUID="f57269b6-0ff1-452e-abaf-68082848e2a2" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-jc4cv" Apr 30 03:31:50.900645 
containerd[1797]: time="2025-04-30T03:31:50.900602000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qs5zt,Uid:70a36ffc-f930-41b6-bd21-d3952b2606f9,Namespace:kube-system,Attempt:0,}" Apr 30 03:31:50.933320 kubelet[3359]: I0430 03:31:50.933273 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f57269b6-0ff1-452e-abaf-68082848e2a2-var-lib-calico\") pod \"tigera-operator-797db67f8-jc4cv\" (UID: \"f57269b6-0ff1-452e-abaf-68082848e2a2\") " pod="tigera-operator/tigera-operator-797db67f8-jc4cv" Apr 30 03:31:50.933320 kubelet[3359]: I0430 03:31:50.933328 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wtld\" (UniqueName: \"kubernetes.io/projected/f57269b6-0ff1-452e-abaf-68082848e2a2-kube-api-access-4wtld\") pod \"tigera-operator-797db67f8-jc4cv\" (UID: \"f57269b6-0ff1-452e-abaf-68082848e2a2\") " pod="tigera-operator/tigera-operator-797db67f8-jc4cv" Apr 30 03:31:51.178125 containerd[1797]: time="2025-04-30T03:31:51.177749343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-jc4cv,Uid:f57269b6-0ff1-452e-abaf-68082848e2a2,Namespace:tigera-operator,Attempt:0,}" Apr 30 03:31:52.240782 containerd[1797]: time="2025-04-30T03:31:52.240271674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:31:52.240782 containerd[1797]: time="2025-04-30T03:31:52.240342075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:31:52.240782 containerd[1797]: time="2025-04-30T03:31:52.240360375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:52.240782 containerd[1797]: time="2025-04-30T03:31:52.240726380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:52.294026 containerd[1797]: time="2025-04-30T03:31:52.293391796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:31:52.294026 containerd[1797]: time="2025-04-30T03:31:52.293500297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:31:52.294026 containerd[1797]: time="2025-04-30T03:31:52.293529397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:52.294026 containerd[1797]: time="2025-04-30T03:31:52.293634699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:52.324890 containerd[1797]: time="2025-04-30T03:31:52.323643850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qs5zt,Uid:70a36ffc-f930-41b6-bd21-d3952b2606f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5afbc92ee86eba1bb83a225c42e182cc574a569a034ececec47b62909b05147\"" Apr 30 03:31:52.332418 containerd[1797]: time="2025-04-30T03:31:52.332374352Z" level=info msg="CreateContainer within sandbox \"f5afbc92ee86eba1bb83a225c42e182cc574a569a034ececec47b62909b05147\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:31:52.375227 containerd[1797]: time="2025-04-30T03:31:52.375185453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-jc4cv,Uid:f57269b6-0ff1-452e-abaf-68082848e2a2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"856901c15979d21a4e1b3092717b88ca74804474e83a6d003146ffb42e4cdbe0\"" Apr 30 03:31:52.378731 containerd[1797]: time="2025-04-30T03:31:52.378689894Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 03:31:52.386726 containerd[1797]: time="2025-04-30T03:31:52.386690787Z" level=info msg="CreateContainer within sandbox \"f5afbc92ee86eba1bb83a225c42e182cc574a569a034ececec47b62909b05147\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"241e550a79c65588db1b2971fb0137cbe170354ab0e4c3983e34eb1ce8b98faf\"" Apr 30 03:31:52.387333 containerd[1797]: time="2025-04-30T03:31:52.387239794Z" level=info msg="StartContainer for \"241e550a79c65588db1b2971fb0137cbe170354ab0e4c3983e34eb1ce8b98faf\"" Apr 30 03:31:52.452708 containerd[1797]: time="2025-04-30T03:31:52.452656359Z" level=info msg="StartContainer for \"241e550a79c65588db1b2971fb0137cbe170354ab0e4c3983e34eb1ce8b98faf\" returns successfully" Apr 30 03:31:54.779979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount58941701.mount: Deactivated successfully. 
Apr 30 03:31:55.909274 containerd[1797]: time="2025-04-30T03:31:55.909216414Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:31:55.912685 containerd[1797]: time="2025-04-30T03:31:55.912607656Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 03:31:55.917910 containerd[1797]: time="2025-04-30T03:31:55.917843122Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:31:55.925607 containerd[1797]: time="2025-04-30T03:31:55.925544918Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:31:55.926790 containerd[1797]: time="2025-04-30T03:31:55.926212526Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 3.547474132s" Apr 30 03:31:55.926790 containerd[1797]: time="2025-04-30T03:31:55.926258027Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 03:31:55.929477 containerd[1797]: time="2025-04-30T03:31:55.929419367Z" level=info msg="CreateContainer within sandbox \"856901c15979d21a4e1b3092717b88ca74804474e83a6d003146ffb42e4cdbe0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 03:31:55.977627 containerd[1797]: time="2025-04-30T03:31:55.977528868Z" level=info msg="CreateContainer within sandbox 
\"856901c15979d21a4e1b3092717b88ca74804474e83a6d003146ffb42e4cdbe0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a7fe53b99b2b79c03ef6da7a2dc175f72c33d7689e2353649a0233e95c4a9711\"" Apr 30 03:31:55.979402 containerd[1797]: time="2025-04-30T03:31:55.978337778Z" level=info msg="StartContainer for \"a7fe53b99b2b79c03ef6da7a2dc175f72c33d7689e2353649a0233e95c4a9711\"" Apr 30 03:31:56.038760 containerd[1797]: time="2025-04-30T03:31:56.038705632Z" level=info msg="StartContainer for \"a7fe53b99b2b79c03ef6da7a2dc175f72c33d7689e2353649a0233e95c4a9711\" returns successfully" Apr 30 03:31:56.767291 kubelet[3359]: I0430 03:31:56.766940 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-jc4cv" podStartSLOduration=3.216026659 podStartE2EDuration="6.766917533s" podCreationTimestamp="2025-04-30 03:31:50 +0000 UTC" firstStartedPulling="2025-04-30 03:31:52.376448267 +0000 UTC m=+15.363707021" lastFinishedPulling="2025-04-30 03:31:55.927339241 +0000 UTC m=+18.914597895" observedRunningTime="2025-04-30 03:31:56.766893733 +0000 UTC m=+19.754152487" watchObservedRunningTime="2025-04-30 03:31:56.766917533 +0000 UTC m=+19.754176287" Apr 30 03:31:56.767291 kubelet[3359]: I0430 03:31:56.767184 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qs5zt" podStartSLOduration=6.767171836 podStartE2EDuration="6.767171836s" podCreationTimestamp="2025-04-30 03:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:31:52.756394911 +0000 UTC m=+15.743653565" watchObservedRunningTime="2025-04-30 03:31:56.767171836 +0000 UTC m=+19.754430590" Apr 30 03:31:59.146498 kubelet[3359]: I0430 03:31:59.144861 3359 topology_manager.go:215] "Topology Admit Handler" podUID="dcf52fbf-70fa-4121-8c2b-f5f795f4c175" podNamespace="calico-system" 
podName="calico-typha-84f8d99796-kbfb7" Apr 30 03:31:59.190562 kubelet[3359]: I0430 03:31:59.190512 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcf52fbf-70fa-4121-8c2b-f5f795f4c175-tigera-ca-bundle\") pod \"calico-typha-84f8d99796-kbfb7\" (UID: \"dcf52fbf-70fa-4121-8c2b-f5f795f4c175\") " pod="calico-system/calico-typha-84f8d99796-kbfb7" Apr 30 03:31:59.192723 kubelet[3359]: I0430 03:31:59.192583 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qvpm\" (UniqueName: \"kubernetes.io/projected/dcf52fbf-70fa-4121-8c2b-f5f795f4c175-kube-api-access-6qvpm\") pod \"calico-typha-84f8d99796-kbfb7\" (UID: \"dcf52fbf-70fa-4121-8c2b-f5f795f4c175\") " pod="calico-system/calico-typha-84f8d99796-kbfb7" Apr 30 03:31:59.192723 kubelet[3359]: I0430 03:31:59.192652 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dcf52fbf-70fa-4121-8c2b-f5f795f4c175-typha-certs\") pod \"calico-typha-84f8d99796-kbfb7\" (UID: \"dcf52fbf-70fa-4121-8c2b-f5f795f4c175\") " pod="calico-system/calico-typha-84f8d99796-kbfb7" Apr 30 03:31:59.267588 kubelet[3359]: I0430 03:31:59.267533 3359 topology_manager.go:215] "Topology Admit Handler" podUID="cfcfd262-ecad-4d56-ac3f-c505dfd6db0d" podNamespace="calico-system" podName="calico-node-2gkv6" Apr 30 03:31:59.293668 kubelet[3359]: I0430 03:31:59.293618 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cfcfd262-ecad-4d56-ac3f-c505dfd6db0d-cni-net-dir\") pod \"calico-node-2gkv6\" (UID: \"cfcfd262-ecad-4d56-ac3f-c505dfd6db0d\") " pod="calico-system/calico-node-2gkv6" Apr 30 03:31:59.293668 kubelet[3359]: I0430 03:31:59.293677 3359 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfcfd262-ecad-4d56-ac3f-c505dfd6db0d-tigera-ca-bundle\") pod \"calico-node-2gkv6\" (UID: \"cfcfd262-ecad-4d56-ac3f-c505dfd6db0d\") " pod="calico-system/calico-node-2gkv6" Apr 30 03:31:59.293936 kubelet[3359]: I0430 03:31:59.293706 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cfcfd262-ecad-4d56-ac3f-c505dfd6db0d-var-run-calico\") pod \"calico-node-2gkv6\" (UID: \"cfcfd262-ecad-4d56-ac3f-c505dfd6db0d\") " pod="calico-system/calico-node-2gkv6" Apr 30 03:31:59.293936 kubelet[3359]: I0430 03:31:59.293727 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfcfd262-ecad-4d56-ac3f-c505dfd6db0d-xtables-lock\") pod \"calico-node-2gkv6\" (UID: \"cfcfd262-ecad-4d56-ac3f-c505dfd6db0d\") " pod="calico-system/calico-node-2gkv6" Apr 30 03:31:59.293936 kubelet[3359]: I0430 03:31:59.293747 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cfcfd262-ecad-4d56-ac3f-c505dfd6db0d-flexvol-driver-host\") pod \"calico-node-2gkv6\" (UID: \"cfcfd262-ecad-4d56-ac3f-c505dfd6db0d\") " pod="calico-system/calico-node-2gkv6" Apr 30 03:31:59.293936 kubelet[3359]: I0430 03:31:59.293771 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cfcfd262-ecad-4d56-ac3f-c505dfd6db0d-cni-bin-dir\") pod \"calico-node-2gkv6\" (UID: \"cfcfd262-ecad-4d56-ac3f-c505dfd6db0d\") " pod="calico-system/calico-node-2gkv6" Apr 30 03:31:59.293936 kubelet[3359]: I0430 03:31:59.293791 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cfcfd262-ecad-4d56-ac3f-c505dfd6db0d-cni-log-dir\") pod \"calico-node-2gkv6\" (UID: \"cfcfd262-ecad-4d56-ac3f-c505dfd6db0d\") " pod="calico-system/calico-node-2gkv6" Apr 30 03:31:59.294119 kubelet[3359]: I0430 03:31:59.293812 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngwsz\" (UniqueName: \"kubernetes.io/projected/cfcfd262-ecad-4d56-ac3f-c505dfd6db0d-kube-api-access-ngwsz\") pod \"calico-node-2gkv6\" (UID: \"cfcfd262-ecad-4d56-ac3f-c505dfd6db0d\") " pod="calico-system/calico-node-2gkv6" Apr 30 03:31:59.294119 kubelet[3359]: I0430 03:31:59.293857 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfcfd262-ecad-4d56-ac3f-c505dfd6db0d-lib-modules\") pod \"calico-node-2gkv6\" (UID: \"cfcfd262-ecad-4d56-ac3f-c505dfd6db0d\") " pod="calico-system/calico-node-2gkv6" Apr 30 03:31:59.294119 kubelet[3359]: I0430 03:31:59.293877 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cfcfd262-ecad-4d56-ac3f-c505dfd6db0d-var-lib-calico\") pod \"calico-node-2gkv6\" (UID: \"cfcfd262-ecad-4d56-ac3f-c505dfd6db0d\") " pod="calico-system/calico-node-2gkv6" Apr 30 03:31:59.294119 kubelet[3359]: I0430 03:31:59.293906 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cfcfd262-ecad-4d56-ac3f-c505dfd6db0d-policysync\") pod \"calico-node-2gkv6\" (UID: \"cfcfd262-ecad-4d56-ac3f-c505dfd6db0d\") " pod="calico-system/calico-node-2gkv6" Apr 30 03:31:59.294119 kubelet[3359]: I0430 03:31:59.293927 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/cfcfd262-ecad-4d56-ac3f-c505dfd6db0d-node-certs\") pod \"calico-node-2gkv6\" (UID: \"cfcfd262-ecad-4d56-ac3f-c505dfd6db0d\") " pod="calico-system/calico-node-2gkv6" Apr 30 03:31:59.406557 kubelet[3359]: E0430 03:31:59.403695 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.406557 kubelet[3359]: W0430 03:31:59.403724 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.406557 kubelet[3359]: E0430 03:31:59.403749 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.412787 kubelet[3359]: E0430 03:31:59.412068 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.412787 kubelet[3359]: W0430 03:31:59.412089 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.412787 kubelet[3359]: E0430 03:31:59.412116 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.457183 containerd[1797]: time="2025-04-30T03:31:59.456550247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84f8d99796-kbfb7,Uid:dcf52fbf-70fa-4121-8c2b-f5f795f4c175,Namespace:calico-system,Attempt:0,}" Apr 30 03:31:59.479191 kubelet[3359]: I0430 03:31:59.479136 3359 topology_manager.go:215] "Topology Admit Handler" podUID="4c8bd750-0601-46f1-814d-82809dd1a74f" podNamespace="calico-system" podName="csi-node-driver-f5dfm" Apr 30 03:31:59.480590 kubelet[3359]: E0430 03:31:59.479572 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5dfm" podUID="4c8bd750-0601-46f1-814d-82809dd1a74f" Apr 30 03:31:59.481324 kubelet[3359]: E0430 03:31:59.481199 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.481895 kubelet[3359]: W0430 03:31:59.481497 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.481895 kubelet[3359]: E0430 03:31:59.481528 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.483373 kubelet[3359]: E0430 03:31:59.483162 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.484334 kubelet[3359]: W0430 03:31:59.483179 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.484334 kubelet[3359]: E0430 03:31:59.483847 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.486077 kubelet[3359]: E0430 03:31:59.485945 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.486077 kubelet[3359]: W0430 03:31:59.485960 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.486077 kubelet[3359]: E0430 03:31:59.486014 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.488885 kubelet[3359]: E0430 03:31:59.488439 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.488885 kubelet[3359]: W0430 03:31:59.488486 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.488885 kubelet[3359]: E0430 03:31:59.488507 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.489674 kubelet[3359]: E0430 03:31:59.489552 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.489674 kubelet[3359]: W0430 03:31:59.489567 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.490026 kubelet[3359]: E0430 03:31:59.489792 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.490269 kubelet[3359]: E0430 03:31:59.490257 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.490443 kubelet[3359]: W0430 03:31:59.490358 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.490443 kubelet[3359]: E0430 03:31:59.490395 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.490940 kubelet[3359]: E0430 03:31:59.490822 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.490940 kubelet[3359]: W0430 03:31:59.490837 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.490940 kubelet[3359]: E0430 03:31:59.490865 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.491320 kubelet[3359]: E0430 03:31:59.491308 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.491492 kubelet[3359]: W0430 03:31:59.491406 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.491492 kubelet[3359]: E0430 03:31:59.491437 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.491910 kubelet[3359]: E0430 03:31:59.491813 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.491910 kubelet[3359]: W0430 03:31:59.491827 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.491910 kubelet[3359]: E0430 03:31:59.491867 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.492310 kubelet[3359]: E0430 03:31:59.492240 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.492310 kubelet[3359]: W0430 03:31:59.492253 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.492310 kubelet[3359]: E0430 03:31:59.492266 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.492768 kubelet[3359]: E0430 03:31:59.492671 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.492768 kubelet[3359]: W0430 03:31:59.492695 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.492768 kubelet[3359]: E0430 03:31:59.492710 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.493423 kubelet[3359]: E0430 03:31:59.493335 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.493423 kubelet[3359]: W0430 03:31:59.493351 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.493423 kubelet[3359]: E0430 03:31:59.493364 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.494183 kubelet[3359]: E0430 03:31:59.494006 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.494183 kubelet[3359]: W0430 03:31:59.494020 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.494183 kubelet[3359]: E0430 03:31:59.494034 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.494538 kubelet[3359]: E0430 03:31:59.494417 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.494538 kubelet[3359]: W0430 03:31:59.494429 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.494538 kubelet[3359]: E0430 03:31:59.494442 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.496581 kubelet[3359]: E0430 03:31:59.496519 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.496581 kubelet[3359]: W0430 03:31:59.496535 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.496581 kubelet[3359]: E0430 03:31:59.496549 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.497379 kubelet[3359]: E0430 03:31:59.496951 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.497379 kubelet[3359]: W0430 03:31:59.496965 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.497379 kubelet[3359]: E0430 03:31:59.497147 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.499236 kubelet[3359]: E0430 03:31:59.499126 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.499236 kubelet[3359]: W0430 03:31:59.499140 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.499236 kubelet[3359]: E0430 03:31:59.499154 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.505759 kubelet[3359]: E0430 03:31:59.499901 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.505759 kubelet[3359]: W0430 03:31:59.499915 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.505759 kubelet[3359]: E0430 03:31:59.499931 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.505759 kubelet[3359]: E0430 03:31:59.501182 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.505759 kubelet[3359]: W0430 03:31:59.501197 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.505759 kubelet[3359]: E0430 03:31:59.501212 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.505759 kubelet[3359]: E0430 03:31:59.502553 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.505759 kubelet[3359]: W0430 03:31:59.502566 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.505759 kubelet[3359]: E0430 03:31:59.502583 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.506300 kubelet[3359]: E0430 03:31:59.506282 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.507528 kubelet[3359]: W0430 03:31:59.507501 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.507638 kubelet[3359]: E0430 03:31:59.507626 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.507857 kubelet[3359]: I0430 03:31:59.507731 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c8bd750-0601-46f1-814d-82809dd1a74f-socket-dir\") pod \"csi-node-driver-f5dfm\" (UID: \"4c8bd750-0601-46f1-814d-82809dd1a74f\") " pod="calico-system/csi-node-driver-f5dfm" Apr 30 03:31:59.508192 kubelet[3359]: E0430 03:31:59.508174 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.508480 kubelet[3359]: W0430 03:31:59.508385 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.508480 kubelet[3359]: E0430 03:31:59.508409 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.508480 kubelet[3359]: I0430 03:31:59.508434 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27tth\" (UniqueName: \"kubernetes.io/projected/4c8bd750-0601-46f1-814d-82809dd1a74f-kube-api-access-27tth\") pod \"csi-node-driver-f5dfm\" (UID: \"4c8bd750-0601-46f1-814d-82809dd1a74f\") " pod="calico-system/csi-node-driver-f5dfm" Apr 30 03:31:59.509704 kubelet[3359]: E0430 03:31:59.509543 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.509704 kubelet[3359]: W0430 03:31:59.509562 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.509704 kubelet[3359]: E0430 03:31:59.509582 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.509704 kubelet[3359]: I0430 03:31:59.509605 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4c8bd750-0601-46f1-814d-82809dd1a74f-varrun\") pod \"csi-node-driver-f5dfm\" (UID: \"4c8bd750-0601-46f1-814d-82809dd1a74f\") " pod="calico-system/csi-node-driver-f5dfm" Apr 30 03:31:59.510037 kubelet[3359]: E0430 03:31:59.509906 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.510037 kubelet[3359]: W0430 03:31:59.509919 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.511731 kubelet[3359]: E0430 03:31:59.510453 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.511731 kubelet[3359]: I0430 03:31:59.510516 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c8bd750-0601-46f1-814d-82809dd1a74f-kubelet-dir\") pod \"csi-node-driver-f5dfm\" (UID: \"4c8bd750-0601-46f1-814d-82809dd1a74f\") " pod="calico-system/csi-node-driver-f5dfm" Apr 30 03:31:59.512053 kubelet[3359]: E0430 03:31:59.511941 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.512053 kubelet[3359]: W0430 03:31:59.511983 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.512393 kubelet[3359]: E0430 03:31:59.512285 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.512565 kubelet[3359]: E0430 03:31:59.512538 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.512958 kubelet[3359]: W0430 03:31:59.512752 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.513580 kubelet[3359]: E0430 03:31:59.513181 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.514104 kubelet[3359]: E0430 03:31:59.513815 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.514104 kubelet[3359]: W0430 03:31:59.513830 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.514104 kubelet[3359]: E0430 03:31:59.513924 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.516725 kubelet[3359]: E0430 03:31:59.514845 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.516725 kubelet[3359]: W0430 03:31:59.514860 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.516725 kubelet[3359]: E0430 03:31:59.516656 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.516725 kubelet[3359]: I0430 03:31:59.516691 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c8bd750-0601-46f1-814d-82809dd1a74f-registration-dir\") pod \"csi-node-driver-f5dfm\" (UID: \"4c8bd750-0601-46f1-814d-82809dd1a74f\") " pod="calico-system/csi-node-driver-f5dfm" Apr 30 03:31:59.517292 kubelet[3359]: E0430 03:31:59.517100 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.517292 kubelet[3359]: W0430 03:31:59.517113 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.517292 kubelet[3359]: E0430 03:31:59.517216 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.517668 kubelet[3359]: E0430 03:31:59.517507 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.517668 kubelet[3359]: W0430 03:31:59.517521 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.517668 kubelet[3359]: E0430 03:31:59.517534 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.518131 kubelet[3359]: E0430 03:31:59.517933 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.518131 kubelet[3359]: W0430 03:31:59.517946 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.518131 kubelet[3359]: E0430 03:31:59.517970 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.518455 kubelet[3359]: E0430 03:31:59.518329 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.518455 kubelet[3359]: W0430 03:31:59.518341 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.518455 kubelet[3359]: E0430 03:31:59.518353 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.520337 kubelet[3359]: E0430 03:31:59.520104 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.520337 kubelet[3359]: W0430 03:31:59.520119 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.520337 kubelet[3359]: E0430 03:31:59.520134 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.520672 kubelet[3359]: E0430 03:31:59.520565 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.520672 kubelet[3359]: W0430 03:31:59.520578 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.520672 kubelet[3359]: E0430 03:31:59.520592 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.521696 kubelet[3359]: E0430 03:31:59.521621 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.521696 kubelet[3359]: W0430 03:31:59.521637 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.521696 kubelet[3359]: E0430 03:31:59.521651 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.575173 containerd[1797]: time="2025-04-30T03:31:59.574663223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:31:59.575768 containerd[1797]: time="2025-04-30T03:31:59.575104829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:31:59.575768 containerd[1797]: time="2025-04-30T03:31:59.575131229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:59.578485 containerd[1797]: time="2025-04-30T03:31:59.577547459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:59.578782 containerd[1797]: time="2025-04-30T03:31:59.578745774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2gkv6,Uid:cfcfd262-ecad-4d56-ac3f-c505dfd6db0d,Namespace:calico-system,Attempt:0,}" Apr 30 03:31:59.619999 kubelet[3359]: E0430 03:31:59.619895 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.619999 kubelet[3359]: W0430 03:31:59.619918 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.619999 kubelet[3359]: E0430 03:31:59.619946 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.620276 kubelet[3359]: E0430 03:31:59.620247 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.620276 kubelet[3359]: W0430 03:31:59.620259 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.620276 kubelet[3359]: E0430 03:31:59.620273 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.621582 kubelet[3359]: E0430 03:31:59.620606 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.621582 kubelet[3359]: W0430 03:31:59.620621 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.621582 kubelet[3359]: E0430 03:31:59.620635 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.621582 kubelet[3359]: E0430 03:31:59.620901 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.621582 kubelet[3359]: W0430 03:31:59.620912 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.621582 kubelet[3359]: E0430 03:31:59.620926 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.622800 kubelet[3359]: E0430 03:31:59.622168 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.622800 kubelet[3359]: W0430 03:31:59.622181 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.622800 kubelet[3359]: E0430 03:31:59.622201 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.625403 kubelet[3359]: E0430 03:31:59.623092 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.625403 kubelet[3359]: W0430 03:31:59.623112 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.625403 kubelet[3359]: E0430 03:31:59.623127 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.625403 kubelet[3359]: E0430 03:31:59.623350 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.625403 kubelet[3359]: W0430 03:31:59.623362 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.625403 kubelet[3359]: E0430 03:31:59.623376 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.625403 kubelet[3359]: E0430 03:31:59.624529 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.625403 kubelet[3359]: W0430 03:31:59.624541 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.625871 kubelet[3359]: E0430 03:31:59.625428 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.625871 kubelet[3359]: W0430 03:31:59.625440 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.627031 kubelet[3359]: E0430 03:31:59.627004 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.627031 kubelet[3359]: W0430 03:31:59.627028 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.629236 kubelet[3359]: E0430 03:31:59.629214 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.630500 kubelet[3359]: W0430 03:31:59.630355 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.631266 kubelet[3359]: E0430 03:31:59.631048 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.631266 kubelet[3359]: W0430 03:31:59.631065 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.631266 kubelet[3359]: E0430 03:31:59.631082 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.631825 kubelet[3359]: E0430 03:31:59.631669 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.631825 kubelet[3359]: W0430 03:31:59.631785 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.631825 kubelet[3359]: E0430 03:31:59.631805 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.632955 kubelet[3359]: E0430 03:31:59.632347 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.632955 kubelet[3359]: W0430 03:31:59.632486 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.632955 kubelet[3359]: E0430 03:31:59.632505 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.634143 kubelet[3359]: E0430 03:31:59.633242 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.634143 kubelet[3359]: W0430 03:31:59.633434 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.634143 kubelet[3359]: E0430 03:31:59.633498 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.635552 kubelet[3359]: E0430 03:31:59.630311 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.636573 kubelet[3359]: E0430 03:31:59.636553 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.636677 kubelet[3359]: W0430 03:31:59.636658 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.636741 kubelet[3359]: E0430 03:31:59.636680 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.636741 kubelet[3359]: E0430 03:31:59.630335 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.637331 kubelet[3359]: E0430 03:31:59.637313 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.637331 kubelet[3359]: W0430 03:31:59.637331 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.637584 kubelet[3359]: E0430 03:31:59.637495 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.638176 kubelet[3359]: E0430 03:31:59.638159 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.638176 kubelet[3359]: W0430 03:31:59.638176 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.638489 kubelet[3359]: E0430 03:31:59.638192 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.639116 kubelet[3359]: E0430 03:31:59.638996 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.639286 kubelet[3359]: W0430 03:31:59.639122 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.639286 kubelet[3359]: E0430 03:31:59.639140 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.642070 kubelet[3359]: E0430 03:31:59.642039 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.642904 kubelet[3359]: E0430 03:31:59.630343 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.642904 kubelet[3359]: E0430 03:31:59.642765 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.642904 kubelet[3359]: W0430 03:31:59.642777 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.642904 kubelet[3359]: E0430 03:31:59.642799 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.643364 kubelet[3359]: E0430 03:31:59.643344 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.643728 kubelet[3359]: W0430 03:31:59.643363 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.643728 kubelet[3359]: E0430 03:31:59.643502 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.649548 kubelet[3359]: E0430 03:31:59.649523 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.649548 kubelet[3359]: W0430 03:31:59.649548 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.650149 kubelet[3359]: E0430 03:31:59.649569 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.668895 kubelet[3359]: E0430 03:31:59.668645 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.668895 kubelet[3359]: W0430 03:31:59.668673 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.668895 kubelet[3359]: E0430 03:31:59.668696 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.673866 kubelet[3359]: E0430 03:31:59.673796 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.673866 kubelet[3359]: W0430 03:31:59.673816 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.673866 kubelet[3359]: E0430 03:31:59.673833 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.685522 kubelet[3359]: E0430 03:31:59.681553 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.685522 kubelet[3359]: W0430 03:31:59.681647 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.685522 kubelet[3359]: E0430 03:31:59.681667 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:31:59.691991 kubelet[3359]: E0430 03:31:59.691949 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:31:59.691991 kubelet[3359]: W0430 03:31:59.691971 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:31:59.691991 kubelet[3359]: E0430 03:31:59.691991 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:31:59.696141 containerd[1797]: time="2025-04-30T03:31:59.695711336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:31:59.696141 containerd[1797]: time="2025-04-30T03:31:59.695783537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:31:59.696141 containerd[1797]: time="2025-04-30T03:31:59.695806037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:59.696141 containerd[1797]: time="2025-04-30T03:31:59.695913639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:59.745372 containerd[1797]: time="2025-04-30T03:31:59.745314956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84f8d99796-kbfb7,Uid:dcf52fbf-70fa-4121-8c2b-f5f795f4c175,Namespace:calico-system,Attempt:0,} returns sandbox id \"207ba3058355483a3fd37916dadf02dcc0436d3705b8dc32a2906e783aa46743\"" Apr 30 03:31:59.750505 containerd[1797]: time="2025-04-30T03:31:59.750347719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 03:31:59.792039 containerd[1797]: time="2025-04-30T03:31:59.791844338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2gkv6,Uid:cfcfd262-ecad-4d56-ac3f-c505dfd6db0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"21441fca554f2609310317ee8a58e4f7eba2836419925a183bfc21b1b398877b\"" Apr 30 03:32:01.658548 kubelet[3359]: E0430 03:32:01.657598 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5dfm" podUID="4c8bd750-0601-46f1-814d-82809dd1a74f" Apr 30 03:32:02.316813 containerd[1797]: time="2025-04-30T03:32:02.316757976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:02.325189 containerd[1797]: time="2025-04-30T03:32:02.325127775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 03:32:02.328978 containerd[1797]: time="2025-04-30T03:32:02.328925120Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:02.335845 containerd[1797]: time="2025-04-30T03:32:02.335489098Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:02.336640 containerd[1797]: time="2025-04-30T03:32:02.336363408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.585966889s" Apr 30 03:32:02.336640 containerd[1797]: time="2025-04-30T03:32:02.336404909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:32:02.338031 containerd[1797]: time="2025-04-30T03:32:02.337999027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 03:32:02.357327 containerd[1797]: time="2025-04-30T03:32:02.357282456Z" level=info msg="CreateContainer within sandbox \"207ba3058355483a3fd37916dadf02dcc0436d3705b8dc32a2906e783aa46743\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:32:02.472442 containerd[1797]: time="2025-04-30T03:32:02.472388417Z" level=info msg="CreateContainer within sandbox \"207ba3058355483a3fd37916dadf02dcc0436d3705b8dc32a2906e783aa46743\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ce9c7b4af1613e61e725bb736e8a63d48956ad1233f9d5fa1535826bf70c7d4f\"" Apr 30 03:32:02.474652 containerd[1797]: time="2025-04-30T03:32:02.473093326Z" level=info msg="StartContainer for \"ce9c7b4af1613e61e725bb736e8a63d48956ad1233f9d5fa1535826bf70c7d4f\"" Apr 30 03:32:02.549621 containerd[1797]: time="2025-04-30T03:32:02.549570930Z" level=info msg="StartContainer for 
\"ce9c7b4af1613e61e725bb736e8a63d48956ad1233f9d5fa1535826bf70c7d4f\" returns successfully" Apr 30 03:32:02.827519 kubelet[3359]: E0430 03:32:02.827483 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:32:02.827519 kubelet[3359]: W0430 03:32:02.827509 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:32:02.828171 kubelet[3359]: E0430 03:32:02.827533 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:32:02.828171 kubelet[3359]: E0430 03:32:02.827805 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:32:02.828171 kubelet[3359]: W0430 03:32:02.827821 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:32:02.828171 kubelet[3359]: E0430 03:32:02.827836 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 30 03:32:02.828171 kubelet[3359]: E0430 03:32:02.828076 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.828171 kubelet[3359]: W0430 03:32:02.828086 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.828171 kubelet[3359]: E0430 03:32:02.828100 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.828603 kubelet[3359]: E0430 03:32:02.828309 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.828603 kubelet[3359]: W0430 03:32:02.828320 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.828603 kubelet[3359]: E0430 03:32:02.828332 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.828603 kubelet[3359]: E0430 03:32:02.828567 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.828603 kubelet[3359]: W0430 03:32:02.828580 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.828603 kubelet[3359]: E0430 03:32:02.828593 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.828986 kubelet[3359]: E0430 03:32:02.828801 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.828986 kubelet[3359]: W0430 03:32:02.828811 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.828986 kubelet[3359]: E0430 03:32:02.828822 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.829159 kubelet[3359]: E0430 03:32:02.829021 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.829159 kubelet[3359]: W0430 03:32:02.829032 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.829159 kubelet[3359]: E0430 03:32:02.829045 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.829367 kubelet[3359]: E0430 03:32:02.829314 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.829367 kubelet[3359]: W0430 03:32:02.829325 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.829367 kubelet[3359]: E0430 03:32:02.829339 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.829651 kubelet[3359]: E0430 03:32:02.829603 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.829651 kubelet[3359]: W0430 03:32:02.829617 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.829651 kubelet[3359]: E0430 03:32:02.829631 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.829856 kubelet[3359]: E0430 03:32:02.829838 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.829856 kubelet[3359]: W0430 03:32:02.829854 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.829978 kubelet[3359]: E0430 03:32:02.829868 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.830082 kubelet[3359]: E0430 03:32:02.830068 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.830082 kubelet[3359]: W0430 03:32:02.830080 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.830215 kubelet[3359]: E0430 03:32:02.830093 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.830313 kubelet[3359]: E0430 03:32:02.830298 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.830313 kubelet[3359]: W0430 03:32:02.830310 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.830416 kubelet[3359]: E0430 03:32:02.830324 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.830608 kubelet[3359]: E0430 03:32:02.830592 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.830608 kubelet[3359]: W0430 03:32:02.830605 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.830780 kubelet[3359]: E0430 03:32:02.830619 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.830858 kubelet[3359]: E0430 03:32:02.830844 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.830858 kubelet[3359]: W0430 03:32:02.830855 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.830987 kubelet[3359]: E0430 03:32:02.830868 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 30 03:32:02.831068 kubelet[3359]: E0430 03:32:02.831055 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.831118 kubelet[3359]: W0430 03:32:02.831070 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.831118 kubelet[3359]: E0430 03:32:02.831083 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.846483 kubelet[3359]: E0430 03:32:02.846441 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.846483 kubelet[3359]: W0430 03:32:02.846487 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.846764 kubelet[3359]: E0430 03:32:02.846507 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.846911 kubelet[3359]: E0430 03:32:02.846814 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.846911 kubelet[3359]: W0430 03:32:02.846828 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.846911 kubelet[3359]: E0430 03:32:02.846849 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.847097 kubelet[3359]: E0430 03:32:02.847076 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.847097 kubelet[3359]: W0430 03:32:02.847094 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.847189 kubelet[3359]: E0430 03:32:02.847115 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.847354 kubelet[3359]: E0430 03:32:02.847336 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.847354 kubelet[3359]: W0430 03:32:02.847351 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.847541 kubelet[3359]: E0430 03:32:02.847372 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.847624 kubelet[3359]: E0430 03:32:02.847592 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.847624 kubelet[3359]: W0430 03:32:02.847604 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.847758 kubelet[3359]: E0430 03:32:02.847623 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.847891 kubelet[3359]: E0430 03:32:02.847874 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.847891 kubelet[3359]: W0430 03:32:02.847888 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.847998 kubelet[3359]: E0430 03:32:02.847906 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.848306 kubelet[3359]: E0430 03:32:02.848271 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.848306 kubelet[3359]: W0430 03:32:02.848286 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.848439 kubelet[3359]: E0430 03:32:02.848379 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.848572 kubelet[3359]: E0430 03:32:02.848559 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.848640 kubelet[3359]: W0430 03:32:02.848573 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.848680 kubelet[3359]: E0430 03:32:02.848666 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.848871 kubelet[3359]: E0430 03:32:02.848856 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.848941 kubelet[3359]: W0430 03:32:02.848923 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.849000 kubelet[3359]: E0430 03:32:02.848946 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 30 03:32:02.849173 kubelet[3359]: E0430 03:32:02.849158 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.849173 kubelet[3359]: W0430 03:32:02.849171 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.849290 kubelet[3359]: E0430 03:32:02.849189 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.849423 kubelet[3359]: E0430 03:32:02.849407 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.849423 kubelet[3359]: W0430 03:32:02.849421 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.849571 kubelet[3359]: E0430 03:32:02.849440 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.849716 kubelet[3359]: E0430 03:32:02.849702 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.849716 kubelet[3359]: W0430 03:32:02.849714 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.849827 kubelet[3359]: E0430 03:32:02.849732 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.850290 kubelet[3359]: E0430 03:32:02.850185 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.850290 kubelet[3359]: W0430 03:32:02.850218 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.850290 kubelet[3359]: E0430 03:32:02.850250 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.850551 kubelet[3359]: E0430 03:32:02.850537 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.850709 kubelet[3359]: W0430 03:32:02.850554 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.850709 kubelet[3359]: E0430 03:32:02.850581 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.850963 kubelet[3359]: E0430 03:32:02.850940 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.850963 kubelet[3359]: W0430 03:32:02.850957 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.851104 kubelet[3359]: E0430 03:32:02.850975 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.851232 kubelet[3359]: E0430 03:32:02.851212 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.851232 kubelet[3359]: W0430 03:32:02.851227 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.851366 kubelet[3359]: E0430 03:32:02.851243 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:02.851580 kubelet[3359]: E0430 03:32:02.851562 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.851580 kubelet[3359]: W0430 03:32:02.851577 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.851730 kubelet[3359]: E0430 03:32:02.851593 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 30 03:32:02.852064 kubelet[3359]: E0430 03:32:02.852044 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:02.852064 kubelet[3359]: W0430 03:32:02.852060 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:02.852174 kubelet[3359]: E0430 03:32:02.852077 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:03.659453 kubelet[3359]: E0430 03:32:03.657877 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5dfm" podUID="4c8bd750-0601-46f1-814d-82809dd1a74f"
Apr 30 03:32:03.800652 kubelet[3359]: I0430 03:32:03.800571 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84f8d99796-kbfb7" podStartSLOduration=2.212505916 podStartE2EDuration="4.800541129s" podCreationTimestamp="2025-04-30 03:31:59 +0000 UTC" firstStartedPulling="2025-04-30 03:31:59.749530909 +0000 UTC m=+22.736789663" lastFinishedPulling="2025-04-30 03:32:02.337566222 +0000 UTC m=+25.324824876" observedRunningTime="2025-04-30 03:32:02.791178388 +0000 UTC m=+25.778437042" watchObservedRunningTime="2025-04-30 03:32:03.800541129 +0000 UTC m=+26.787799883"
Apr 30 03:32:03.838966 kubelet[3359]: E0430 03:32:03.838809 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:03.840379 kubelet[3359]: W0430 03:32:03.839230 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:03.840379 kubelet[3359]: E0430 03:32:03.839276 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:03.840379 kubelet[3359]: E0430 03:32:03.840190 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:03.840379 kubelet[3359]: W0430 03:32:03.840206 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:03.840379 kubelet[3359]: E0430 03:32:03.840224 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:03.841600 kubelet[3359]: E0430 03:32:03.841554 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:03.841600 kubelet[3359]: W0430 03:32:03.841569 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:03.841600 kubelet[3359]: E0430 03:32:03.841586 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 30 03:32:03.842191 kubelet[3359]: E0430 03:32:03.841915 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:03.842191 kubelet[3359]: W0430 03:32:03.841927 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:03.842191 kubelet[3359]: E0430 03:32:03.841943 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:03.842943 kubelet[3359]: E0430 03:32:03.842538 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:03.842943 kubelet[3359]: W0430 03:32:03.842553 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:03.842943 kubelet[3359]: E0430 03:32:03.842567 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:03.842943 kubelet[3359]: E0430 03:32:03.842792 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:03.842943 kubelet[3359]: W0430 03:32:03.842812 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:03.842943 kubelet[3359]: E0430 03:32:03.842825 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:03.843652 kubelet[3359]: E0430 03:32:03.843411 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:03.843652 kubelet[3359]: W0430 03:32:03.843427 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:03.843652 kubelet[3359]: E0430 03:32:03.843442 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:03.844030 kubelet[3359]: E0430 03:32:03.844011 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:03.844030 kubelet[3359]: W0430 03:32:03.844027 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:03.844151 kubelet[3359]: E0430 03:32:03.844041 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:03.844361 kubelet[3359]: E0430 03:32:03.844344 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:03.844361 kubelet[3359]: W0430 03:32:03.844360 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:03.844506 kubelet[3359]: E0430 03:32:03.844374 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:03.844907 kubelet[3359]: E0430 03:32:03.844774 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:03.844907 kubelet[3359]: W0430 03:32:03.844789 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:03.844907 kubelet[3359]: E0430 03:32:03.844803 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:32:03.845383 kubelet[3359]: E0430 03:32:03.845337 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:32:03.845383 kubelet[3359]: W0430 03:32:03.845352 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:32:03.845383 kubelet[3359]: E0430 03:32:03.845366 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:32:03.846362 kubelet[3359]: E0430 03:32:03.845805 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:32:03.846362 kubelet[3359]: W0430 03:32:03.845818 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:32:03.846362 kubelet[3359]: E0430 03:32:03.845831 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:32:03.846362 kubelet[3359]: E0430 03:32:03.846309 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:32:03.846362 kubelet[3359]: W0430 03:32:03.846322 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:32:03.846362 kubelet[3359]: E0430 03:32:03.846337 3359 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
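The repeated driver-call failures above come from the kubelet's FlexVolume probe: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, the kubelet executes the driver binary with the subcommand "init" and parses its stdout as JSON. Here the executable nodeagent~uds/uds does not exist, so stdout is empty and the JSON unmarshal fails with "unexpected end of JSON input". A minimal sketch of the response contract such a driver stub would need to satisfy (the function name driver_call is illustrative, not part of the kubelet API; the JSON shape follows the FlexVolume driver interface):

```shell
# driver_call simulates a FlexVolume driver stub's response to a kubelet
# subcommand. The kubelet invokes the real binary as: <driver> init
# and expects a JSON object on stdout; an empty stdout produces exactly
# the "unexpected end of JSON input" errors seen in the log.
driver_call() {
  case "$1" in
    init)
      # Report success and capabilities; attach=false means no
      # attach/detach calls will be made to this driver.
      printf '%s\n' '{"status":"Success","capabilities":{"attach":false}}'
      ;;
    *)
      # Unimplemented operations must still emit valid JSON.
      printf '%s\n' '{"status":"Not supported"}'
      ;;
  esac
}

driver_call init
```

Installing any executable that answers "init" this way at the probed path would silence these errors; absent that, the kubelet simply skips the plugin directory on every probe cycle, which is why the same triplet of messages recurs.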
Error: unexpected end of JSON input" Apr 30 03:32:03.955432 containerd[1797]: time="2025-04-30T03:32:03.955274259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:03.965269 containerd[1797]: time="2025-04-30T03:32:03.963869161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:32:03.980117 containerd[1797]: time="2025-04-30T03:32:03.979166142Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:03.983480 containerd[1797]: time="2025-04-30T03:32:03.983423992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:03.984950 containerd[1797]: time="2025-04-30T03:32:03.984913810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.646870882s" Apr 30 03:32:03.985033 containerd[1797]: time="2025-04-30T03:32:03.984956810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:32:03.988243 containerd[1797]: time="2025-04-30T03:32:03.987493940Z" level=info msg="CreateContainer within sandbox \"21441fca554f2609310317ee8a58e4f7eba2836419925a183bfc21b1b398877b\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:32:04.030283 containerd[1797]: time="2025-04-30T03:32:04.030232046Z" level=info msg="CreateContainer within sandbox \"21441fca554f2609310317ee8a58e4f7eba2836419925a183bfc21b1b398877b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9176a895f9c33a8d64ed22a05569dd51a8eb65b1d05688c7a8278f17a2fee78a\"" Apr 30 03:32:04.031137 containerd[1797]: time="2025-04-30T03:32:04.031101856Z" level=info msg="StartContainer for \"9176a895f9c33a8d64ed22a05569dd51a8eb65b1d05688c7a8278f17a2fee78a\"" Apr 30 03:32:04.103371 containerd[1797]: time="2025-04-30T03:32:04.103246410Z" level=info msg="StartContainer for \"9176a895f9c33a8d64ed22a05569dd51a8eb65b1d05688c7a8278f17a2fee78a\" returns successfully" Apr 30 03:32:04.142377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9176a895f9c33a8d64ed22a05569dd51a8eb65b1d05688c7a8278f17a2fee78a-rootfs.mount: Deactivated successfully. Apr 30 03:32:05.536940 containerd[1797]: time="2025-04-30T03:32:05.536840169Z" level=info msg="shim disconnected" id=9176a895f9c33a8d64ed22a05569dd51a8eb65b1d05688c7a8278f17a2fee78a namespace=k8s.io Apr 30 03:32:05.536940 containerd[1797]: time="2025-04-30T03:32:05.536916170Z" level=warning msg="cleaning up after shim disconnected" id=9176a895f9c33a8d64ed22a05569dd51a8eb65b1d05688c7a8278f17a2fee78a namespace=k8s.io Apr 30 03:32:05.536940 containerd[1797]: time="2025-04-30T03:32:05.536929870Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:32:05.658040 kubelet[3359]: E0430 03:32:05.657922 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5dfm" podUID="4c8bd750-0601-46f1-814d-82809dd1a74f" Apr 30 03:32:05.790422 containerd[1797]: time="2025-04-30T03:32:05.789310155Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:32:07.658702 kubelet[3359]: E0430 03:32:07.658235 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5dfm" podUID="4c8bd750-0601-46f1-814d-82809dd1a74f" Apr 30 03:32:09.659992 kubelet[3359]: E0430 03:32:09.658556 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5dfm" podUID="4c8bd750-0601-46f1-814d-82809dd1a74f" Apr 30 03:32:09.957073 containerd[1797]: time="2025-04-30T03:32:09.956775136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:09.963394 containerd[1797]: time="2025-04-30T03:32:09.963191711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:32:09.967244 containerd[1797]: time="2025-04-30T03:32:09.967037755Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:09.972030 containerd[1797]: time="2025-04-30T03:32:09.971994713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:09.972805 containerd[1797]: time="2025-04-30T03:32:09.972655421Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", 
repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 4.183294865s" Apr 30 03:32:09.972805 containerd[1797]: time="2025-04-30T03:32:09.972698421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:32:09.975997 containerd[1797]: time="2025-04-30T03:32:09.975765157Z" level=info msg="CreateContainer within sandbox \"21441fca554f2609310317ee8a58e4f7eba2836419925a183bfc21b1b398877b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:32:10.040840 containerd[1797]: time="2025-04-30T03:32:10.040787115Z" level=info msg="CreateContainer within sandbox \"21441fca554f2609310317ee8a58e4f7eba2836419925a183bfc21b1b398877b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5e69e3cb109927564bb39e3ef27eb9fb88591944d430462147ae86b01d0dd380\"" Apr 30 03:32:10.043350 containerd[1797]: time="2025-04-30T03:32:10.041667126Z" level=info msg="StartContainer for \"5e69e3cb109927564bb39e3ef27eb9fb88591944d430462147ae86b01d0dd380\"" Apr 30 03:32:10.082181 systemd[1]: run-containerd-runc-k8s.io-5e69e3cb109927564bb39e3ef27eb9fb88591944d430462147ae86b01d0dd380-runc.DGDQVM.mount: Deactivated successfully. Apr 30 03:32:10.118569 containerd[1797]: time="2025-04-30T03:32:10.118513022Z" level=info msg="StartContainer for \"5e69e3cb109927564bb39e3ef27eb9fb88591944d430462147ae86b01d0dd380\" returns successfully" Apr 30 03:32:11.615655 kubelet[3359]: I0430 03:32:11.615152 3359 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 03:32:11.629771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e69e3cb109927564bb39e3ef27eb9fb88591944d430462147ae86b01d0dd380-rootfs.mount: Deactivated successfully. 
Apr 30 03:32:11.678948 containerd[1797]: time="2025-04-30T03:32:11.677684403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5dfm,Uid:4c8bd750-0601-46f1-814d-82809dd1a74f,Namespace:calico-system,Attempt:0,}" Apr 30 03:32:11.681691 kubelet[3359]: I0430 03:32:11.681656 3359 topology_manager.go:215] "Topology Admit Handler" podUID="340e92ff-7ea8-4903-9227-eed397cdce47" podNamespace="kube-system" podName="coredns-7db6d8ff4d-l2bpt" Apr 30 03:32:11.681930 kubelet[3359]: I0430 03:32:11.681908 3359 topology_manager.go:215] "Topology Admit Handler" podUID="a2f13ad9-2be6-4a46-b67d-267afc984299" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wxch7" Apr 30 03:32:11.682637 kubelet[3359]: I0430 03:32:11.682611 3359 topology_manager.go:215] "Topology Admit Handler" podUID="72a78e6a-2103-489a-9bb3-6f815a567a66" podNamespace="calico-system" podName="calico-kube-controllers-69f9ffdfcc-4vn5q" Apr 30 03:32:11.682834 kubelet[3359]: I0430 03:32:11.682814 3359 topology_manager.go:215] "Topology Admit Handler" podUID="bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656" podNamespace="calico-apiserver" podName="calico-apiserver-6cc8c4d69c-f48sp" Apr 30 03:32:11.683875 kubelet[3359]: I0430 03:32:11.683836 3359 topology_manager.go:215] "Topology Admit Handler" podUID="f183badf-71b7-4297-a2f8-acdc049a5567" podNamespace="calico-apiserver" podName="calico-apiserver-6cc8c4d69c-vf8m8" Apr 30 03:32:11.727919 kubelet[3359]: I0430 03:32:11.727049 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4tjs\" (UniqueName: \"kubernetes.io/projected/340e92ff-7ea8-4903-9227-eed397cdce47-kube-api-access-q4tjs\") pod \"coredns-7db6d8ff4d-l2bpt\" (UID: \"340e92ff-7ea8-4903-9227-eed397cdce47\") " pod="kube-system/coredns-7db6d8ff4d-l2bpt" Apr 30 03:32:11.727919 kubelet[3359]: I0430 03:32:11.727091 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/f183badf-71b7-4297-a2f8-acdc049a5567-calico-apiserver-certs\") pod \"calico-apiserver-6cc8c4d69c-vf8m8\" (UID: \"f183badf-71b7-4297-a2f8-acdc049a5567\") " pod="calico-apiserver/calico-apiserver-6cc8c4d69c-vf8m8" Apr 30 03:32:11.727919 kubelet[3359]: I0430 03:32:11.727117 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v8lb\" (UniqueName: \"kubernetes.io/projected/72a78e6a-2103-489a-9bb3-6f815a567a66-kube-api-access-8v8lb\") pod \"calico-kube-controllers-69f9ffdfcc-4vn5q\" (UID: \"72a78e6a-2103-489a-9bb3-6f815a567a66\") " pod="calico-system/calico-kube-controllers-69f9ffdfcc-4vn5q" Apr 30 03:32:11.727919 kubelet[3359]: I0430 03:32:11.727171 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv4rn\" (UniqueName: \"kubernetes.io/projected/bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656-kube-api-access-gv4rn\") pod \"calico-apiserver-6cc8c4d69c-f48sp\" (UID: \"bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656\") " pod="calico-apiserver/calico-apiserver-6cc8c4d69c-f48sp" Apr 30 03:32:11.727919 kubelet[3359]: I0430 03:32:11.727203 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72a78e6a-2103-489a-9bb3-6f815a567a66-tigera-ca-bundle\") pod \"calico-kube-controllers-69f9ffdfcc-4vn5q\" (UID: \"72a78e6a-2103-489a-9bb3-6f815a567a66\") " pod="calico-system/calico-kube-controllers-69f9ffdfcc-4vn5q" Apr 30 03:32:11.728872 kubelet[3359]: I0430 03:32:11.727232 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2f13ad9-2be6-4a46-b67d-267afc984299-config-volume\") pod \"coredns-7db6d8ff4d-wxch7\" (UID: \"a2f13ad9-2be6-4a46-b67d-267afc984299\") " pod="kube-system/coredns-7db6d8ff4d-wxch7" Apr 30 03:32:11.728872 
kubelet[3359]: I0430 03:32:11.727258 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/340e92ff-7ea8-4903-9227-eed397cdce47-config-volume\") pod \"coredns-7db6d8ff4d-l2bpt\" (UID: \"340e92ff-7ea8-4903-9227-eed397cdce47\") " pod="kube-system/coredns-7db6d8ff4d-l2bpt" Apr 30 03:32:11.728872 kubelet[3359]: I0430 03:32:11.727281 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjv6b\" (UniqueName: \"kubernetes.io/projected/f183badf-71b7-4297-a2f8-acdc049a5567-kube-api-access-zjv6b\") pod \"calico-apiserver-6cc8c4d69c-vf8m8\" (UID: \"f183badf-71b7-4297-a2f8-acdc049a5567\") " pod="calico-apiserver/calico-apiserver-6cc8c4d69c-vf8m8" Apr 30 03:32:11.728872 kubelet[3359]: I0430 03:32:11.727306 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656-calico-apiserver-certs\") pod \"calico-apiserver-6cc8c4d69c-f48sp\" (UID: \"bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656\") " pod="calico-apiserver/calico-apiserver-6cc8c4d69c-f48sp" Apr 30 03:32:11.728872 kubelet[3359]: I0430 03:32:11.727350 3359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smdvr\" (UniqueName: \"kubernetes.io/projected/a2f13ad9-2be6-4a46-b67d-267afc984299-kube-api-access-smdvr\") pod \"coredns-7db6d8ff4d-wxch7\" (UID: \"a2f13ad9-2be6-4a46-b67d-267afc984299\") " pod="kube-system/coredns-7db6d8ff4d-wxch7" Apr 30 03:32:11.989654 containerd[1797]: time="2025-04-30T03:32:11.989603141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8c4d69c-vf8m8,Uid:f183badf-71b7-4297-a2f8-acdc049a5567,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:32:12.002911 containerd[1797]: time="2025-04-30T03:32:12.002717194Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69f9ffdfcc-4vn5q,Uid:72a78e6a-2103-489a-9bb3-6f815a567a66,Namespace:calico-system,Attempt:0,}" Apr 30 03:32:12.003253 containerd[1797]: time="2025-04-30T03:32:12.003220199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l2bpt,Uid:340e92ff-7ea8-4903-9227-eed397cdce47,Namespace:kube-system,Attempt:0,}" Apr 30 03:32:12.003611 containerd[1797]: time="2025-04-30T03:32:12.003400901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8c4d69c-f48sp,Uid:bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:32:12.003611 containerd[1797]: time="2025-04-30T03:32:12.003546103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wxch7,Uid:a2f13ad9-2be6-4a46-b67d-267afc984299,Namespace:kube-system,Attempt:0,}" Apr 30 03:32:13.297356 containerd[1797]: time="2025-04-30T03:32:13.297150288Z" level=info msg="shim disconnected" id=5e69e3cb109927564bb39e3ef27eb9fb88591944d430462147ae86b01d0dd380 namespace=k8s.io Apr 30 03:32:13.297356 containerd[1797]: time="2025-04-30T03:32:13.297213989Z" level=warning msg="cleaning up after shim disconnected" id=5e69e3cb109927564bb39e3ef27eb9fb88591944d430462147ae86b01d0dd380 namespace=k8s.io Apr 30 03:32:13.297356 containerd[1797]: time="2025-04-30T03:32:13.297225889Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:32:13.665764 containerd[1797]: time="2025-04-30T03:32:13.665339781Z" level=error msg="Failed to destroy network for sandbox \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.667240 containerd[1797]: time="2025-04-30T03:32:13.667200803Z" level=error msg="encountered an error cleaning up failed sandbox 
\"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.667586 containerd[1797]: time="2025-04-30T03:32:13.667555807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5dfm,Uid:4c8bd750-0601-46f1-814d-82809dd1a74f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.667892 kubelet[3359]: E0430 03:32:13.667856 3359 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.669310 kubelet[3359]: E0430 03:32:13.669150 3359 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f5dfm" Apr 30 03:32:13.669310 kubelet[3359]: E0430 03:32:13.669195 3359 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f5dfm" Apr 30 03:32:13.669310 kubelet[3359]: E0430 03:32:13.669263 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f5dfm_calico-system(4c8bd750-0601-46f1-814d-82809dd1a74f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f5dfm_calico-system(4c8bd750-0601-46f1-814d-82809dd1a74f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f5dfm" podUID="4c8bd750-0601-46f1-814d-82809dd1a74f" Apr 30 03:32:13.693580 containerd[1797]: time="2025-04-30T03:32:13.693456209Z" level=error msg="Failed to destroy network for sandbox \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.694192 containerd[1797]: time="2025-04-30T03:32:13.694146417Z" level=error msg="encountered an error cleaning up failed sandbox \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.694424 containerd[1797]: time="2025-04-30T03:32:13.694392720Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69f9ffdfcc-4vn5q,Uid:72a78e6a-2103-489a-9bb3-6f815a567a66,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.695358 kubelet[3359]: E0430 03:32:13.695067 3359 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.695358 kubelet[3359]: E0430 03:32:13.695167 3359 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69f9ffdfcc-4vn5q" Apr 30 03:32:13.695358 kubelet[3359]: E0430 03:32:13.695195 3359 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69f9ffdfcc-4vn5q" Apr 30 03:32:13.697282 kubelet[3359]: E0430 
03:32:13.695259 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69f9ffdfcc-4vn5q_calico-system(72a78e6a-2103-489a-9bb3-6f815a567a66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69f9ffdfcc-4vn5q_calico-system(72a78e6a-2103-489a-9bb3-6f815a567a66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69f9ffdfcc-4vn5q" podUID="72a78e6a-2103-489a-9bb3-6f815a567a66" Apr 30 03:32:13.699481 containerd[1797]: time="2025-04-30T03:32:13.698675970Z" level=error msg="Failed to destroy network for sandbox \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.701412 containerd[1797]: time="2025-04-30T03:32:13.701367301Z" level=error msg="encountered an error cleaning up failed sandbox \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.702738 containerd[1797]: time="2025-04-30T03:32:13.702698417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wxch7,Uid:a2f13ad9-2be6-4a46-b67d-267afc984299,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.703008 kubelet[3359]: E0430 03:32:13.702970 3359 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.703092 kubelet[3359]: E0430 03:32:13.703037 3359 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wxch7" Apr 30 03:32:13.703092 kubelet[3359]: E0430 03:32:13.703069 3359 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wxch7" Apr 30 03:32:13.703187 kubelet[3359]: E0430 03:32:13.703127 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wxch7_kube-system(a2f13ad9-2be6-4a46-b67d-267afc984299)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-wxch7_kube-system(a2f13ad9-2be6-4a46-b67d-267afc984299)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wxch7" podUID="a2f13ad9-2be6-4a46-b67d-267afc984299" Apr 30 03:32:13.709680 containerd[1797]: time="2025-04-30T03:32:13.709639798Z" level=error msg="Failed to destroy network for sandbox \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.710194 containerd[1797]: time="2025-04-30T03:32:13.710151304Z" level=error msg="encountered an error cleaning up failed sandbox \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.710352 containerd[1797]: time="2025-04-30T03:32:13.710323106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8c4d69c-vf8m8,Uid:f183badf-71b7-4297-a2f8-acdc049a5567,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.710739 kubelet[3359]: E0430 03:32:13.710697 3359 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.710841 kubelet[3359]: E0430 03:32:13.710766 3359 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc8c4d69c-vf8m8" Apr 30 03:32:13.710841 kubelet[3359]: E0430 03:32:13.710792 3359 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc8c4d69c-vf8m8" Apr 30 03:32:13.712482 kubelet[3359]: E0430 03:32:13.711728 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cc8c4d69c-vf8m8_calico-apiserver(f183badf-71b7-4297-a2f8-acdc049a5567)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cc8c4d69c-vf8m8_calico-apiserver(f183badf-71b7-4297-a2f8-acdc049a5567)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cc8c4d69c-vf8m8" podUID="f183badf-71b7-4297-a2f8-acdc049a5567" Apr 30 03:32:13.718373 containerd[1797]: time="2025-04-30T03:32:13.718326599Z" level=error msg="Failed to destroy network for sandbox \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.718978 containerd[1797]: time="2025-04-30T03:32:13.718931506Z" level=error msg="encountered an error cleaning up failed sandbox \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.719157 containerd[1797]: time="2025-04-30T03:32:13.719126209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8c4d69c-f48sp,Uid:bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.719569 kubelet[3359]: E0430 03:32:13.719528 3359 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Apr 30 03:32:13.719739 kubelet[3359]: E0430 03:32:13.719718 3359 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc8c4d69c-f48sp" Apr 30 03:32:13.719840 kubelet[3359]: E0430 03:32:13.719819 3359 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc8c4d69c-f48sp" Apr 30 03:32:13.720034 kubelet[3359]: E0430 03:32:13.719964 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cc8c4d69c-f48sp_calico-apiserver(bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cc8c4d69c-f48sp_calico-apiserver(bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cc8c4d69c-f48sp" podUID="bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656" Apr 30 03:32:13.723323 containerd[1797]: time="2025-04-30T03:32:13.723283557Z" level=error msg="Failed 
to destroy network for sandbox \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.723646 containerd[1797]: time="2025-04-30T03:32:13.723606561Z" level=error msg="encountered an error cleaning up failed sandbox \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.723730 containerd[1797]: time="2025-04-30T03:32:13.723664161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l2bpt,Uid:340e92ff-7ea8-4903-9227-eed397cdce47,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.724191 kubelet[3359]: E0430 03:32:13.723886 3359 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.724191 kubelet[3359]: E0430 03:32:13.723939 3359 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l2bpt" Apr 30 03:32:13.724191 kubelet[3359]: E0430 03:32:13.723965 3359 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l2bpt" Apr 30 03:32:13.724396 kubelet[3359]: E0430 03:32:13.724038 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-l2bpt_kube-system(340e92ff-7ea8-4903-9227-eed397cdce47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-l2bpt_kube-system(340e92ff-7ea8-4903-9227-eed397cdce47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-l2bpt" podUID="340e92ff-7ea8-4903-9227-eed397cdce47" Apr 30 03:32:13.810444 kubelet[3359]: I0430 03:32:13.810398 3359 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Apr 30 03:32:13.812605 containerd[1797]: time="2025-04-30T03:32:13.811783189Z" level=info msg="StopPodSandbox for \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\"" Apr 30 03:32:13.812605 containerd[1797]: time="2025-04-30T03:32:13.812036992Z" level=info 
msg="Ensure that sandbox 85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16 in task-service has been cleanup successfully" Apr 30 03:32:13.813847 kubelet[3359]: I0430 03:32:13.813815 3359 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Apr 30 03:32:13.814586 containerd[1797]: time="2025-04-30T03:32:13.814557221Z" level=info msg="StopPodSandbox for \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\"" Apr 30 03:32:13.814862 containerd[1797]: time="2025-04-30T03:32:13.814752124Z" level=info msg="Ensure that sandbox 83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64 in task-service has been cleanup successfully" Apr 30 03:32:13.816440 kubelet[3359]: I0430 03:32:13.816409 3359 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Apr 30 03:32:13.818394 containerd[1797]: time="2025-04-30T03:32:13.818325565Z" level=info msg="StopPodSandbox for \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\"" Apr 30 03:32:13.818656 containerd[1797]: time="2025-04-30T03:32:13.818632969Z" level=info msg="Ensure that sandbox 9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2 in task-service has been cleanup successfully" Apr 30 03:32:13.821394 kubelet[3359]: I0430 03:32:13.820773 3359 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Apr 30 03:32:13.825233 containerd[1797]: time="2025-04-30T03:32:13.825206146Z" level=info msg="StopPodSandbox for \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\"" Apr 30 03:32:13.826281 containerd[1797]: time="2025-04-30T03:32:13.826248558Z" level=info msg="Ensure that sandbox 340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867 in task-service has been 
cleanup successfully" Apr 30 03:32:13.829202 kubelet[3359]: I0430 03:32:13.829170 3359 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Apr 30 03:32:13.829874 containerd[1797]: time="2025-04-30T03:32:13.829803999Z" level=info msg="StopPodSandbox for \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\"" Apr 30 03:32:13.830197 containerd[1797]: time="2025-04-30T03:32:13.829983801Z" level=info msg="Ensure that sandbox a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98 in task-service has been cleanup successfully" Apr 30 03:32:13.835911 kubelet[3359]: I0430 03:32:13.835888 3359 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Apr 30 03:32:13.838482 containerd[1797]: time="2025-04-30T03:32:13.838421000Z" level=info msg="StopPodSandbox for \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\"" Apr 30 03:32:13.841814 containerd[1797]: time="2025-04-30T03:32:13.840940329Z" level=info msg="Ensure that sandbox 651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39 in task-service has been cleanup successfully" Apr 30 03:32:13.853213 containerd[1797]: time="2025-04-30T03:32:13.851018147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:32:13.928666 containerd[1797]: time="2025-04-30T03:32:13.927319936Z" level=error msg="StopPodSandbox for \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\" failed" error="failed to destroy network for sandbox \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.929705 kubelet[3359]: E0430 03:32:13.929650 3359 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Apr 30 03:32:13.929846 kubelet[3359]: E0430 03:32:13.929732 3359 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16"} Apr 30 03:32:13.929846 kubelet[3359]: E0430 03:32:13.929831 3359 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:32:13.930014 kubelet[3359]: E0430 03:32:13.929866 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cc8c4d69c-f48sp" podUID="bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656" Apr 30 03:32:13.943381 containerd[1797]: time="2025-04-30T03:32:13.943323123Z" level=error 
msg="StopPodSandbox for \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\" failed" error="failed to destroy network for sandbox \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.955265 kubelet[3359]: E0430 03:32:13.954820 3359 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Apr 30 03:32:13.955265 kubelet[3359]: E0430 03:32:13.954873 3359 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64"} Apr 30 03:32:13.955265 kubelet[3359]: E0430 03:32:13.954918 3359 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"340e92ff-7ea8-4903-9227-eed397cdce47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:32:13.955265 kubelet[3359]: E0430 03:32:13.954957 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"340e92ff-7ea8-4903-9227-eed397cdce47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-l2bpt" podUID="340e92ff-7ea8-4903-9227-eed397cdce47" Apr 30 03:32:13.975874 containerd[1797]: time="2025-04-30T03:32:13.975816002Z" level=error msg="StopPodSandbox for \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\" failed" error="failed to destroy network for sandbox \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.976509 kubelet[3359]: E0430 03:32:13.976421 3359 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Apr 30 03:32:13.976636 kubelet[3359]: E0430 03:32:13.976528 3359 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2"} Apr 30 03:32:13.976636 kubelet[3359]: E0430 03:32:13.976574 3359 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a2f13ad9-2be6-4a46-b67d-267afc984299\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:32:13.976636 kubelet[3359]: E0430 03:32:13.976606 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a2f13ad9-2be6-4a46-b67d-267afc984299\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wxch7" podUID="a2f13ad9-2be6-4a46-b67d-267afc984299" Apr 30 03:32:13.977353 containerd[1797]: time="2025-04-30T03:32:13.977181118Z" level=error msg="StopPodSandbox for \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\" failed" error="failed to destroy network for sandbox \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.977513 kubelet[3359]: E0430 03:32:13.977433 3359 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Apr 30 03:32:13.977588 kubelet[3359]: E0430 03:32:13.977529 3359 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867"} Apr 30 03:32:13.977640 kubelet[3359]: E0430 03:32:13.977594 3359 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72a78e6a-2103-489a-9bb3-6f815a567a66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:32:13.977713 kubelet[3359]: E0430 03:32:13.977663 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72a78e6a-2103-489a-9bb3-6f815a567a66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69f9ffdfcc-4vn5q" podUID="72a78e6a-2103-489a-9bb3-6f815a567a66" Apr 30 03:32:13.988934 containerd[1797]: time="2025-04-30T03:32:13.988702652Z" level=error msg="StopPodSandbox for \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\" failed" error="failed to destroy network for sandbox \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.989079 kubelet[3359]: E0430 03:32:13.989005 3359 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Apr 30 03:32:13.989079 kubelet[3359]: E0430 03:32:13.989060 3359 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98"} Apr 30 03:32:13.990341 kubelet[3359]: E0430 03:32:13.989201 3359 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c8bd750-0601-46f1-814d-82809dd1a74f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:32:13.990341 kubelet[3359]: E0430 03:32:13.989239 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c8bd750-0601-46f1-814d-82809dd1a74f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f5dfm" podUID="4c8bd750-0601-46f1-814d-82809dd1a74f" Apr 30 03:32:13.991961 containerd[1797]: time="2025-04-30T03:32:13.991918590Z" level=error msg="StopPodSandbox for \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\" failed" error="failed to 
destroy network for sandbox \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:32:13.992176 kubelet[3359]: E0430 03:32:13.992142 3359 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Apr 30 03:32:13.992259 kubelet[3359]: E0430 03:32:13.992186 3359 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39"} Apr 30 03:32:13.992259 kubelet[3359]: E0430 03:32:13.992229 3359 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f183badf-71b7-4297-a2f8-acdc049a5567\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:32:13.992366 kubelet[3359]: E0430 03:32:13.992259 3359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f183badf-71b7-4297-a2f8-acdc049a5567\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cc8c4d69c-vf8m8" podUID="f183badf-71b7-4297-a2f8-acdc049a5567" Apr 30 03:32:14.396600 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64-shm.mount: Deactivated successfully. Apr 30 03:32:14.396772 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867-shm.mount: Deactivated successfully. Apr 30 03:32:14.396902 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39-shm.mount: Deactivated successfully. Apr 30 03:32:14.397020 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98-shm.mount: Deactivated successfully. Apr 30 03:32:19.657112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4098197606.mount: Deactivated successfully. 
Apr 30 03:32:19.724491 containerd[1797]: time="2025-04-30T03:32:19.724423068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:19.727350 containerd[1797]: time="2025-04-30T03:32:19.727278602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:32:19.730998 containerd[1797]: time="2025-04-30T03:32:19.730963245Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:19.735107 containerd[1797]: time="2025-04-30T03:32:19.735040294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:19.735641 containerd[1797]: time="2025-04-30T03:32:19.735599900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 5.884533253s" Apr 30 03:32:19.735869 containerd[1797]: time="2025-04-30T03:32:19.735750602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:32:19.748852 containerd[1797]: time="2025-04-30T03:32:19.748813256Z" level=info msg="CreateContainer within sandbox \"21441fca554f2609310317ee8a58e4f7eba2836419925a183bfc21b1b398877b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:32:19.793019 containerd[1797]: time="2025-04-30T03:32:19.792970177Z" level=info 
msg="CreateContainer within sandbox \"21441fca554f2609310317ee8a58e4f7eba2836419925a183bfc21b1b398877b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"15aa1924b21c515d3f4e9a2be387ee66ea071f86f1086fda2d4ed39affad2266\"" Apr 30 03:32:19.793690 containerd[1797]: time="2025-04-30T03:32:19.793656585Z" level=info msg="StartContainer for \"15aa1924b21c515d3f4e9a2be387ee66ea071f86f1086fda2d4ed39affad2266\"" Apr 30 03:32:19.862688 containerd[1797]: time="2025-04-30T03:32:19.860384173Z" level=info msg="StartContainer for \"15aa1924b21c515d3f4e9a2be387ee66ea071f86f1086fda2d4ed39affad2266\" returns successfully" Apr 30 03:32:19.914765 kubelet[3359]: I0430 03:32:19.914575 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2gkv6" podStartSLOduration=0.974260294 podStartE2EDuration="20.914549412s" podCreationTimestamp="2025-04-30 03:31:59 +0000 UTC" firstStartedPulling="2025-04-30 03:31:59.796352194 +0000 UTC m=+22.783610948" lastFinishedPulling="2025-04-30 03:32:19.736641312 +0000 UTC m=+42.723900066" observedRunningTime="2025-04-30 03:32:19.909539753 +0000 UTC m=+42.896798507" watchObservedRunningTime="2025-04-30 03:32:19.914549412 +0000 UTC m=+42.901808066" Apr 30 03:32:20.117507 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:32:20.117654 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Apr 30 03:32:21.776499 kernel: bpftool[4688]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:32:22.055391 systemd-networkd[1365]: vxlan.calico: Link UP Apr 30 03:32:22.056678 systemd-networkd[1365]: vxlan.calico: Gained carrier Apr 30 03:32:23.319748 systemd-networkd[1365]: vxlan.calico: Gained IPv6LL Apr 30 03:32:26.658403 containerd[1797]: time="2025-04-30T03:32:26.658332070Z" level=info msg="StopPodSandbox for \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\"" Apr 30 03:32:26.661015 containerd[1797]: time="2025-04-30T03:32:26.658946977Z" level=info msg="StopPodSandbox for \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\"" Apr 30 03:32:26.781095 containerd[1797]: 2025-04-30 03:32:26.737 [INFO][4794] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Apr 30 03:32:26.781095 containerd[1797]: 2025-04-30 03:32:26.737 [INFO][4794] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" iface="eth0" netns="/var/run/netns/cni-f6c9e299-4919-855e-c300-78eb53cea8bf" Apr 30 03:32:26.781095 containerd[1797]: 2025-04-30 03:32:26.738 [INFO][4794] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" iface="eth0" netns="/var/run/netns/cni-f6c9e299-4919-855e-c300-78eb53cea8bf" Apr 30 03:32:26.781095 containerd[1797]: 2025-04-30 03:32:26.738 [INFO][4794] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" iface="eth0" netns="/var/run/netns/cni-f6c9e299-4919-855e-c300-78eb53cea8bf" Apr 30 03:32:26.781095 containerd[1797]: 2025-04-30 03:32:26.738 [INFO][4794] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Apr 30 03:32:26.781095 containerd[1797]: 2025-04-30 03:32:26.738 [INFO][4794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Apr 30 03:32:26.781095 containerd[1797]: 2025-04-30 03:32:26.767 [INFO][4810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" HandleID="k8s-pod-network.a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Workload="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:26.781095 containerd[1797]: 2025-04-30 03:32:26.767 [INFO][4810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:26.781095 containerd[1797]: 2025-04-30 03:32:26.767 [INFO][4810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:26.781095 containerd[1797]: 2025-04-30 03:32:26.773 [WARNING][4810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" HandleID="k8s-pod-network.a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Workload="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:26.781095 containerd[1797]: 2025-04-30 03:32:26.773 [INFO][4810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" HandleID="k8s-pod-network.a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Workload="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:26.781095 containerd[1797]: 2025-04-30 03:32:26.776 [INFO][4810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:26.781095 containerd[1797]: 2025-04-30 03:32:26.779 [INFO][4794] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Apr 30 03:32:26.784130 containerd[1797]: time="2025-04-30T03:32:26.783588221Z" level=info msg="TearDown network for sandbox \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\" successfully" Apr 30 03:32:26.784130 containerd[1797]: time="2025-04-30T03:32:26.783627821Z" level=info msg="StopPodSandbox for \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\" returns successfully" Apr 30 03:32:26.787278 systemd[1]: run-netns-cni\x2df6c9e299\x2d4919\x2d855e\x2dc300\x2d78eb53cea8bf.mount: Deactivated successfully. 
Apr 30 03:32:26.788775 containerd[1797]: time="2025-04-30T03:32:26.788076673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5dfm,Uid:4c8bd750-0601-46f1-814d-82809dd1a74f,Namespace:calico-system,Attempt:1,}" Apr 30 03:32:26.794818 containerd[1797]: 2025-04-30 03:32:26.732 [INFO][4784] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Apr 30 03:32:26.794818 containerd[1797]: 2025-04-30 03:32:26.732 [INFO][4784] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" iface="eth0" netns="/var/run/netns/cni-c0d50e98-b51e-5dec-20aa-07951c58fec4" Apr 30 03:32:26.794818 containerd[1797]: 2025-04-30 03:32:26.732 [INFO][4784] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" iface="eth0" netns="/var/run/netns/cni-c0d50e98-b51e-5dec-20aa-07951c58fec4" Apr 30 03:32:26.794818 containerd[1797]: 2025-04-30 03:32:26.733 [INFO][4784] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" iface="eth0" netns="/var/run/netns/cni-c0d50e98-b51e-5dec-20aa-07951c58fec4" Apr 30 03:32:26.794818 containerd[1797]: 2025-04-30 03:32:26.733 [INFO][4784] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Apr 30 03:32:26.794818 containerd[1797]: 2025-04-30 03:32:26.733 [INFO][4784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Apr 30 03:32:26.794818 containerd[1797]: 2025-04-30 03:32:26.767 [INFO][4805] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" HandleID="k8s-pod-network.85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 03:32:26.794818 containerd[1797]: 2025-04-30 03:32:26.767 [INFO][4805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:26.794818 containerd[1797]: 2025-04-30 03:32:26.776 [INFO][4805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:26.794818 containerd[1797]: 2025-04-30 03:32:26.791 [WARNING][4805] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" HandleID="k8s-pod-network.85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 03:32:26.794818 containerd[1797]: 2025-04-30 03:32:26.791 [INFO][4805] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" HandleID="k8s-pod-network.85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 03:32:26.794818 containerd[1797]: 2025-04-30 03:32:26.792 [INFO][4805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:26.794818 containerd[1797]: 2025-04-30 03:32:26.793 [INFO][4784] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Apr 30 03:32:26.798240 containerd[1797]: time="2025-04-30T03:32:26.795183255Z" level=info msg="TearDown network for sandbox \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\" successfully" Apr 30 03:32:26.798240 containerd[1797]: time="2025-04-30T03:32:26.795221056Z" level=info msg="StopPodSandbox for \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\" returns successfully" Apr 30 03:32:26.798240 containerd[1797]: time="2025-04-30T03:32:26.797745885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8c4d69c-f48sp,Uid:bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:32:26.799770 systemd[1]: run-netns-cni\x2dc0d50e98\x2db51e\x2d5dec\x2d20aa\x2d07951c58fec4.mount: Deactivated successfully. 
Apr 30 03:32:26.997511 systemd-networkd[1365]: caliaa2b857b196: Link UP Apr 30 03:32:26.997764 systemd-networkd[1365]: caliaa2b857b196: Gained carrier Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.909 [INFO][4822] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0 calico-apiserver-6cc8c4d69c- calico-apiserver bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656 760 0 2025-04-30 03:31:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cc8c4d69c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-6f0285bad0 calico-apiserver-6cc8c4d69c-f48sp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaa2b857b196 [] []}} ContainerID="6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-f48sp" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-" Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.909 [INFO][4822] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-f48sp" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.949 [INFO][4844] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" HandleID="k8s-pod-network.6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 
03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.958 [INFO][4844] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" HandleID="k8s-pod-network.6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000392be0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-6f0285bad0", "pod":"calico-apiserver-6cc8c4d69c-f48sp", "timestamp":"2025-04-30 03:32:26.949224739 +0000 UTC"}, Hostname:"ci-4081.3.3-a-6f0285bad0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.959 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.959 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.959 [INFO][4844] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-6f0285bad0' Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.961 [INFO][4844] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.964 [INFO][4844] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.967 [INFO][4844] ipam/ipam.go 489: Trying affinity for 192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.968 [INFO][4844] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.970 [INFO][4844] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.970 [INFO][4844] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.64/26 handle="k8s-pod-network.6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.971 [INFO][4844] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681 Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.978 [INFO][4844] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.64/26 handle="k8s-pod-network.6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.984 [INFO][4844] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.21.65/26] block=192.168.21.64/26 handle="k8s-pod-network.6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.984 [INFO][4844] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.65/26] handle="k8s-pod-network.6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.984 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:27.019695 containerd[1797]: 2025-04-30 03:32:26.984 [INFO][4844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.65/26] IPv6=[] ContainerID="6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" HandleID="k8s-pod-network.6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 03:32:27.024821 containerd[1797]: 2025-04-30 03:32:26.987 [INFO][4822] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-f48sp" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0", GenerateName:"calico-apiserver-6cc8c4d69c-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c4d69c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"", Pod:"calico-apiserver-6cc8c4d69c-f48sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa2b857b196", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:27.024821 containerd[1797]: 2025-04-30 03:32:26.988 [INFO][4822] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.65/32] ContainerID="6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-f48sp" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 03:32:27.024821 containerd[1797]: 2025-04-30 03:32:26.988 [INFO][4822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa2b857b196 ContainerID="6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-f48sp" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 03:32:27.024821 containerd[1797]: 2025-04-30 03:32:26.990 [INFO][4822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-f48sp" 
WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 03:32:27.024821 containerd[1797]: 2025-04-30 03:32:26.990 [INFO][4822] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-f48sp" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0", GenerateName:"calico-apiserver-6cc8c4d69c-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c4d69c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681", Pod:"calico-apiserver-6cc8c4d69c-f48sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa2b857b196", MAC:"b6:cc:5e:bf:7a:93", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:27.024821 containerd[1797]: 2025-04-30 03:32:27.016 [INFO][4822] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-f48sp" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 03:32:27.063251 systemd-networkd[1365]: calib406e0d4bbf: Link UP Apr 30 03:32:27.064717 systemd-networkd[1365]: calib406e0d4bbf: Gained carrier Apr 30 03:32:27.083618 containerd[1797]: time="2025-04-30T03:32:27.079682550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:32:27.083618 containerd[1797]: time="2025-04-30T03:32:27.079761651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:32:27.083618 containerd[1797]: time="2025-04-30T03:32:27.079779251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:27.083618 containerd[1797]: time="2025-04-30T03:32:27.080032554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:26.907 [INFO][4818] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0 csi-node-driver- calico-system 4c8bd750-0601-46f1-814d-82809dd1a74f 761 0 2025-04-30 03:31:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-a-6f0285bad0 csi-node-driver-f5dfm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib406e0d4bbf [] []}} ContainerID="b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" Namespace="calico-system" Pod="csi-node-driver-f5dfm" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-" Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:26.908 [INFO][4818] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" Namespace="calico-system" Pod="csi-node-driver-f5dfm" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:26.951 [INFO][4843] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" HandleID="k8s-pod-network.b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" Workload="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:26.961 [INFO][4843] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" HandleID="k8s-pod-network.b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" Workload="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031bad0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-6f0285bad0", "pod":"csi-node-driver-f5dfm", "timestamp":"2025-04-30 03:32:26.95186657 +0000 UTC"}, Hostname:"ci-4081.3.3-a-6f0285bad0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:26.961 [INFO][4843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:26.984 [INFO][4843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:26.984 [INFO][4843] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-6f0285bad0' Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:26.987 [INFO][4843] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:27.003 [INFO][4843] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:27.020 [INFO][4843] ipam/ipam.go 489: Trying affinity for 192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:27.022 [INFO][4843] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:27.026 [INFO][4843] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:27.026 [INFO][4843] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.64/26 handle="k8s-pod-network.b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:27.029 [INFO][4843] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892 Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:27.041 [INFO][4843] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.64/26 handle="k8s-pod-network.b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:27.050 [INFO][4843] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.21.66/26] block=192.168.21.64/26 handle="k8s-pod-network.b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:27.050 [INFO][4843] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.66/26] handle="k8s-pod-network.b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:27.050 [INFO][4843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:27.102792 containerd[1797]: 2025-04-30 03:32:27.050 [INFO][4843] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.66/26] IPv6=[] ContainerID="b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" HandleID="k8s-pod-network.b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" Workload="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:27.106246 containerd[1797]: 2025-04-30 03:32:27.054 [INFO][4818] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" Namespace="calico-system" Pod="csi-node-driver-f5dfm" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c8bd750-0601-46f1-814d-82809dd1a74f", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"", Pod:"csi-node-driver-f5dfm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib406e0d4bbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:27.106246 containerd[1797]: 2025-04-30 03:32:27.054 [INFO][4818] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.66/32] ContainerID="b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" Namespace="calico-system" Pod="csi-node-driver-f5dfm" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:27.106246 containerd[1797]: 2025-04-30 03:32:27.054 [INFO][4818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib406e0d4bbf ContainerID="b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" Namespace="calico-system" Pod="csi-node-driver-f5dfm" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:27.106246 containerd[1797]: 2025-04-30 03:32:27.065 [INFO][4818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" Namespace="calico-system" Pod="csi-node-driver-f5dfm" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:27.106246 containerd[1797]: 2025-04-30 03:32:27.066 
[INFO][4818] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" Namespace="calico-system" Pod="csi-node-driver-f5dfm" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c8bd750-0601-46f1-814d-82809dd1a74f", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892", Pod:"csi-node-driver-f5dfm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib406e0d4bbf", MAC:"02:a8:34:ec:7d:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:27.106246 containerd[1797]: 2025-04-30 03:32:27.097 [INFO][4818] cni-plugin/k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892" Namespace="calico-system" Pod="csi-node-driver-f5dfm" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:27.148122 containerd[1797]: time="2025-04-30T03:32:27.146537325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:32:27.148122 containerd[1797]: time="2025-04-30T03:32:27.146607926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:32:27.148122 containerd[1797]: time="2025-04-30T03:32:27.146629926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:27.148122 containerd[1797]: time="2025-04-30T03:32:27.146723827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:27.175199 containerd[1797]: time="2025-04-30T03:32:27.175152356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8c4d69c-f48sp,Uid:bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681\"" Apr 30 03:32:27.178187 containerd[1797]: time="2025-04-30T03:32:27.178149291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:32:27.201007 containerd[1797]: time="2025-04-30T03:32:27.200960555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5dfm,Uid:4c8bd750-0601-46f1-814d-82809dd1a74f,Namespace:calico-system,Attempt:1,} returns sandbox id \"b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892\"" Apr 30 03:32:27.658488 containerd[1797]: time="2025-04-30T03:32:27.658426654Z" level=info msg="StopPodSandbox for 
\"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\"" Apr 30 03:32:27.662172 containerd[1797]: time="2025-04-30T03:32:27.660508778Z" level=info msg="StopPodSandbox for \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\"" Apr 30 03:32:27.876399 containerd[1797]: 2025-04-30 03:32:27.750 [INFO][4996] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Apr 30 03:32:27.876399 containerd[1797]: 2025-04-30 03:32:27.750 [INFO][4996] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" iface="eth0" netns="/var/run/netns/cni-d6eab1d6-46d6-f309-7282-79c1b930e564" Apr 30 03:32:27.876399 containerd[1797]: 2025-04-30 03:32:27.751 [INFO][4996] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" iface="eth0" netns="/var/run/netns/cni-d6eab1d6-46d6-f309-7282-79c1b930e564" Apr 30 03:32:27.876399 containerd[1797]: 2025-04-30 03:32:27.752 [INFO][4996] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" iface="eth0" netns="/var/run/netns/cni-d6eab1d6-46d6-f309-7282-79c1b930e564" Apr 30 03:32:27.876399 containerd[1797]: 2025-04-30 03:32:27.752 [INFO][4996] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Apr 30 03:32:27.876399 containerd[1797]: 2025-04-30 03:32:27.752 [INFO][4996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Apr 30 03:32:27.876399 containerd[1797]: 2025-04-30 03:32:27.863 [INFO][5009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" HandleID="k8s-pod-network.651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:27.876399 containerd[1797]: 2025-04-30 03:32:27.863 [INFO][5009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:27.876399 containerd[1797]: 2025-04-30 03:32:27.864 [INFO][5009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:27.876399 containerd[1797]: 2025-04-30 03:32:27.871 [WARNING][5009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" HandleID="k8s-pod-network.651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:27.876399 containerd[1797]: 2025-04-30 03:32:27.871 [INFO][5009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" HandleID="k8s-pod-network.651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:27.876399 containerd[1797]: 2025-04-30 03:32:27.873 [INFO][5009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:27.876399 containerd[1797]: 2025-04-30 03:32:27.874 [INFO][4996] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Apr 30 03:32:27.877976 containerd[1797]: time="2025-04-30T03:32:27.876935985Z" level=info msg="TearDown network for sandbox \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\" successfully" Apr 30 03:32:27.877976 containerd[1797]: time="2025-04-30T03:32:27.877082386Z" level=info msg="StopPodSandbox for \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\" returns successfully" Apr 30 03:32:27.879928 containerd[1797]: time="2025-04-30T03:32:27.879885519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8c4d69c-vf8m8,Uid:f183badf-71b7-4297-a2f8-acdc049a5567,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:32:27.888214 systemd[1]: run-netns-cni\x2dd6eab1d6\x2d46d6\x2df309\x2d7282\x2d79c1b930e564.mount: Deactivated successfully. 
Apr 30 03:32:27.904819 containerd[1797]: 2025-04-30 03:32:27.746 [INFO][4997] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Apr 30 03:32:27.904819 containerd[1797]: 2025-04-30 03:32:27.750 [INFO][4997] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" iface="eth0" netns="/var/run/netns/cni-5fb2af9d-658e-cdd3-b081-09b6b8a313cf" Apr 30 03:32:27.904819 containerd[1797]: 2025-04-30 03:32:27.755 [INFO][4997] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" iface="eth0" netns="/var/run/netns/cni-5fb2af9d-658e-cdd3-b081-09b6b8a313cf" Apr 30 03:32:27.904819 containerd[1797]: 2025-04-30 03:32:27.755 [INFO][4997] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" iface="eth0" netns="/var/run/netns/cni-5fb2af9d-658e-cdd3-b081-09b6b8a313cf" Apr 30 03:32:27.904819 containerd[1797]: 2025-04-30 03:32:27.755 [INFO][4997] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Apr 30 03:32:27.904819 containerd[1797]: 2025-04-30 03:32:27.755 [INFO][4997] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Apr 30 03:32:27.904819 containerd[1797]: 2025-04-30 03:32:27.869 [INFO][5011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" HandleID="k8s-pod-network.340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:27.904819 containerd[1797]: 2025-04-30 
03:32:27.869 [INFO][5011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:27.904819 containerd[1797]: 2025-04-30 03:32:27.873 [INFO][5011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:27.904819 containerd[1797]: 2025-04-30 03:32:27.890 [WARNING][5011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" HandleID="k8s-pod-network.340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:27.904819 containerd[1797]: 2025-04-30 03:32:27.890 [INFO][5011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" HandleID="k8s-pod-network.340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:27.904819 containerd[1797]: 2025-04-30 03:32:27.893 [INFO][5011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:27.904819 containerd[1797]: 2025-04-30 03:32:27.897 [INFO][4997] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Apr 30 03:32:27.906644 containerd[1797]: time="2025-04-30T03:32:27.906602928Z" level=info msg="TearDown network for sandbox \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\" successfully" Apr 30 03:32:27.906878 containerd[1797]: time="2025-04-30T03:32:27.906827431Z" level=info msg="StopPodSandbox for \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\" returns successfully" Apr 30 03:32:27.908124 containerd[1797]: time="2025-04-30T03:32:27.908088746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69f9ffdfcc-4vn5q,Uid:72a78e6a-2103-489a-9bb3-6f815a567a66,Namespace:calico-system,Attempt:1,}" Apr 30 03:32:27.914306 systemd[1]: run-netns-cni\x2d5fb2af9d\x2d658e\x2dcdd3\x2db081\x2d09b6b8a313cf.mount: Deactivated successfully. Apr 30 03:32:28.076849 systemd-networkd[1365]: cali36cbb290ab9: Link UP Apr 30 03:32:28.077105 systemd-networkd[1365]: cali36cbb290ab9: Gained carrier Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:27.977 [INFO][5022] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0 calico-apiserver-6cc8c4d69c- calico-apiserver f183badf-71b7-4297-a2f8-acdc049a5567 773 0 2025-04-30 03:31:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cc8c4d69c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-6f0285bad0 calico-apiserver-6cc8c4d69c-vf8m8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali36cbb290ab9 [] []}} ContainerID="634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-vf8m8" 
WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-" Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:27.978 [INFO][5022] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-vf8m8" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.021 [INFO][5043] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" HandleID="k8s-pod-network.634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.032 [INFO][5043] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" HandleID="k8s-pod-network.634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003052e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-6f0285bad0", "pod":"calico-apiserver-6cc8c4d69c-vf8m8", "timestamp":"2025-04-30 03:32:28.021808063 +0000 UTC"}, Hostname:"ci-4081.3.3-a-6f0285bad0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.033 [INFO][5043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.033 [INFO][5043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.033 [INFO][5043] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-6f0285bad0' Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.034 [INFO][5043] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.038 [INFO][5043] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.043 [INFO][5043] ipam/ipam.go 489: Trying affinity for 192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.044 [INFO][5043] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.047 [INFO][5043] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.047 [INFO][5043] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.64/26 handle="k8s-pod-network.634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.048 [INFO][5043] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.053 [INFO][5043] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.64/26 handle="k8s-pod-network.634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" 
host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.068 [INFO][5043] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.67/26] block=192.168.21.64/26 handle="k8s-pod-network.634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.068 [INFO][5043] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.67/26] handle="k8s-pod-network.634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.068 [INFO][5043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:28.105402 containerd[1797]: 2025-04-30 03:32:28.068 [INFO][5043] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.67/26] IPv6=[] ContainerID="634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" HandleID="k8s-pod-network.634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:28.107674 containerd[1797]: 2025-04-30 03:32:28.071 [INFO][5022] cni-plugin/k8s.go 386: Populated endpoint ContainerID="634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-vf8m8" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0", GenerateName:"calico-apiserver-6cc8c4d69c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f183badf-71b7-4297-a2f8-acdc049a5567", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c4d69c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"", Pod:"calico-apiserver-6cc8c4d69c-vf8m8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36cbb290ab9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:28.107674 containerd[1797]: 2025-04-30 03:32:28.071 [INFO][5022] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.67/32] ContainerID="634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-vf8m8" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:28.107674 containerd[1797]: 2025-04-30 03:32:28.071 [INFO][5022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36cbb290ab9 ContainerID="634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-vf8m8" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:28.107674 containerd[1797]: 2025-04-30 03:32:28.078 [INFO][5022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-vf8m8" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:28.107674 containerd[1797]: 2025-04-30 03:32:28.080 [INFO][5022] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-vf8m8" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0", GenerateName:"calico-apiserver-6cc8c4d69c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f183badf-71b7-4297-a2f8-acdc049a5567", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c4d69c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e", Pod:"calico-apiserver-6cc8c4d69c-vf8m8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36cbb290ab9", MAC:"16:44:c1:0b:75:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:28.107674 containerd[1797]: 2025-04-30 03:32:28.102 [INFO][5022] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c4d69c-vf8m8" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:28.151760 containerd[1797]: time="2025-04-30T03:32:28.149909447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:32:28.151760 containerd[1797]: time="2025-04-30T03:32:28.151457864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:32:28.151760 containerd[1797]: time="2025-04-30T03:32:28.151488565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:28.152274 containerd[1797]: time="2025-04-30T03:32:28.152134772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:28.155555 systemd-networkd[1365]: cali99fd91f4ec9: Link UP Apr 30 03:32:28.156914 systemd-networkd[1365]: cali99fd91f4ec9: Gained carrier Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.033 [INFO][5033] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0 calico-kube-controllers-69f9ffdfcc- calico-system 72a78e6a-2103-489a-9bb3-6f815a567a66 772 0 2025-04-30 03:31:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69f9ffdfcc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-a-6f0285bad0 calico-kube-controllers-69f9ffdfcc-4vn5q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali99fd91f4ec9 [] []}} ContainerID="27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" Namespace="calico-system" Pod="calico-kube-controllers-69f9ffdfcc-4vn5q" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-" Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.033 [INFO][5033] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" Namespace="calico-system" Pod="calico-kube-controllers-69f9ffdfcc-4vn5q" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.072 [INFO][5053] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" 
HandleID="k8s-pod-network.27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.092 [INFO][5053] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" HandleID="k8s-pod-network.27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031bd50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-6f0285bad0", "pod":"calico-kube-controllers-69f9ffdfcc-4vn5q", "timestamp":"2025-04-30 03:32:28.072096845 +0000 UTC"}, Hostname:"ci-4081.3.3-a-6f0285bad0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.093 [INFO][5053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.093 [INFO][5053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.093 [INFO][5053] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-6f0285bad0' Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.100 [INFO][5053] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.107 [INFO][5053] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.115 [INFO][5053] ipam/ipam.go 489: Trying affinity for 192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.119 [INFO][5053] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.122 [INFO][5053] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.123 [INFO][5053] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.64/26 handle="k8s-pod-network.27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.124 [INFO][5053] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83 Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.132 [INFO][5053] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.64/26 handle="k8s-pod-network.27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.149 [INFO][5053] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.21.68/26] block=192.168.21.64/26 handle="k8s-pod-network.27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.149 [INFO][5053] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.68/26] handle="k8s-pod-network.27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.149 [INFO][5053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:28.180282 containerd[1797]: 2025-04-30 03:32:28.149 [INFO][5053] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.68/26] IPv6=[] ContainerID="27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" HandleID="k8s-pod-network.27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:28.185689 containerd[1797]: 2025-04-30 03:32:28.152 [INFO][5033] cni-plugin/k8s.go 386: Populated endpoint ContainerID="27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" Namespace="calico-system" Pod="calico-kube-controllers-69f9ffdfcc-4vn5q" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0", GenerateName:"calico-kube-controllers-69f9ffdfcc-", Namespace:"calico-system", SelfLink:"", UID:"72a78e6a-2103-489a-9bb3-6f815a567a66", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69f9ffdfcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"", Pod:"calico-kube-controllers-69f9ffdfcc-4vn5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99fd91f4ec9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:28.185689 containerd[1797]: 2025-04-30 03:32:28.152 [INFO][5033] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.68/32] ContainerID="27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" Namespace="calico-system" Pod="calico-kube-controllers-69f9ffdfcc-4vn5q" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:28.185689 containerd[1797]: 2025-04-30 03:32:28.152 [INFO][5033] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99fd91f4ec9 ContainerID="27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" Namespace="calico-system" Pod="calico-kube-controllers-69f9ffdfcc-4vn5q" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:28.185689 containerd[1797]: 2025-04-30 03:32:28.157 [INFO][5033] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" Namespace="calico-system" Pod="calico-kube-controllers-69f9ffdfcc-4vn5q" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:28.185689 containerd[1797]: 2025-04-30 03:32:28.157 [INFO][5033] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" Namespace="calico-system" Pod="calico-kube-controllers-69f9ffdfcc-4vn5q" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0", GenerateName:"calico-kube-controllers-69f9ffdfcc-", Namespace:"calico-system", SelfLink:"", UID:"72a78e6a-2103-489a-9bb3-6f815a567a66", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69f9ffdfcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83", Pod:"calico-kube-controllers-69f9ffdfcc-4vn5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.68/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99fd91f4ec9", MAC:"66:95:d4:8d:d1:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:28.185689 containerd[1797]: 2025-04-30 03:32:28.178 [INFO][5033] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83" Namespace="calico-system" Pod="calico-kube-controllers-69f9ffdfcc-4vn5q" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:28.226998 containerd[1797]: time="2025-04-30T03:32:28.226803037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:32:28.228194 containerd[1797]: time="2025-04-30T03:32:28.228108552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:32:28.228488 containerd[1797]: time="2025-04-30T03:32:28.228179353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:28.228815 containerd[1797]: time="2025-04-30T03:32:28.228738260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:28.263604 containerd[1797]: time="2025-04-30T03:32:28.263556563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8c4d69c-vf8m8,Uid:f183badf-71b7-4297-a2f8-acdc049a5567,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e\"" Apr 30 03:32:28.297852 containerd[1797]: time="2025-04-30T03:32:28.297804860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69f9ffdfcc-4vn5q,Uid:72a78e6a-2103-489a-9bb3-6f815a567a66,Namespace:calico-system,Attempt:1,} returns sandbox id \"27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83\"" Apr 30 03:32:28.439626 systemd-networkd[1365]: calib406e0d4bbf: Gained IPv6LL Apr 30 03:32:28.667920 containerd[1797]: time="2025-04-30T03:32:28.667679444Z" level=info msg="StopPodSandbox for \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\"" Apr 30 03:32:28.678055 containerd[1797]: time="2025-04-30T03:32:28.677560858Z" level=info msg="StopPodSandbox for \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\"" Apr 30 03:32:28.841623 containerd[1797]: 2025-04-30 03:32:28.764 [INFO][5191] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Apr 30 03:32:28.841623 containerd[1797]: 2025-04-30 03:32:28.765 [INFO][5191] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" iface="eth0" netns="/var/run/netns/cni-5e079f3e-50a3-e7d0-9a1b-f7d90093fab3" Apr 30 03:32:28.841623 containerd[1797]: 2025-04-30 03:32:28.766 [INFO][5191] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" iface="eth0" netns="/var/run/netns/cni-5e079f3e-50a3-e7d0-9a1b-f7d90093fab3" Apr 30 03:32:28.841623 containerd[1797]: 2025-04-30 03:32:28.766 [INFO][5191] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" iface="eth0" netns="/var/run/netns/cni-5e079f3e-50a3-e7d0-9a1b-f7d90093fab3" Apr 30 03:32:28.841623 containerd[1797]: 2025-04-30 03:32:28.766 [INFO][5191] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Apr 30 03:32:28.841623 containerd[1797]: 2025-04-30 03:32:28.766 [INFO][5191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Apr 30 03:32:28.841623 containerd[1797]: 2025-04-30 03:32:28.814 [INFO][5209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" HandleID="k8s-pod-network.83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:28.841623 containerd[1797]: 2025-04-30 03:32:28.814 [INFO][5209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:28.841623 containerd[1797]: 2025-04-30 03:32:28.814 [INFO][5209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:28.841623 containerd[1797]: 2025-04-30 03:32:28.834 [WARNING][5209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" HandleID="k8s-pod-network.83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:28.841623 containerd[1797]: 2025-04-30 03:32:28.834 [INFO][5209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" HandleID="k8s-pod-network.83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:28.841623 containerd[1797]: 2025-04-30 03:32:28.836 [INFO][5209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:28.841623 containerd[1797]: 2025-04-30 03:32:28.837 [INFO][5191] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Apr 30 03:32:28.846558 containerd[1797]: time="2025-04-30T03:32:28.842608470Z" level=info msg="TearDown network for sandbox \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\" successfully" Apr 30 03:32:28.846558 containerd[1797]: time="2025-04-30T03:32:28.842654070Z" level=info msg="StopPodSandbox for \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\" returns successfully" Apr 30 03:32:28.848490 containerd[1797]: time="2025-04-30T03:32:28.847918231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l2bpt,Uid:340e92ff-7ea8-4903-9227-eed397cdce47,Namespace:kube-system,Attempt:1,}" Apr 30 03:32:28.849349 systemd[1]: run-netns-cni\x2d5e079f3e\x2d50a3\x2de7d0\x2d9a1b\x2df7d90093fab3.mount: Deactivated successfully. 
Apr 30 03:32:28.855732 containerd[1797]: 2025-04-30 03:32:28.778 [INFO][5198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Apr 30 03:32:28.855732 containerd[1797]: 2025-04-30 03:32:28.779 [INFO][5198] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" iface="eth0" netns="/var/run/netns/cni-d9902798-ea89-bd4b-a72a-008d45f0f0da" Apr 30 03:32:28.855732 containerd[1797]: 2025-04-30 03:32:28.779 [INFO][5198] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" iface="eth0" netns="/var/run/netns/cni-d9902798-ea89-bd4b-a72a-008d45f0f0da" Apr 30 03:32:28.855732 containerd[1797]: 2025-04-30 03:32:28.779 [INFO][5198] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" iface="eth0" netns="/var/run/netns/cni-d9902798-ea89-bd4b-a72a-008d45f0f0da" Apr 30 03:32:28.855732 containerd[1797]: 2025-04-30 03:32:28.779 [INFO][5198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Apr 30 03:32:28.855732 containerd[1797]: 2025-04-30 03:32:28.779 [INFO][5198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Apr 30 03:32:28.855732 containerd[1797]: 2025-04-30 03:32:28.835 [INFO][5214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" HandleID="k8s-pod-network.9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:28.855732 containerd[1797]: 2025-04-30 03:32:28.835 [INFO][5214] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:28.855732 containerd[1797]: 2025-04-30 03:32:28.838 [INFO][5214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:28.855732 containerd[1797]: 2025-04-30 03:32:28.848 [WARNING][5214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" HandleID="k8s-pod-network.9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:28.855732 containerd[1797]: 2025-04-30 03:32:28.848 [INFO][5214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" HandleID="k8s-pod-network.9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:28.855732 containerd[1797]: 2025-04-30 03:32:28.852 [INFO][5214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:28.855732 containerd[1797]: 2025-04-30 03:32:28.853 [INFO][5198] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Apr 30 03:32:28.856361 containerd[1797]: time="2025-04-30T03:32:28.856330529Z" level=info msg="TearDown network for sandbox \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\" successfully" Apr 30 03:32:28.856452 containerd[1797]: time="2025-04-30T03:32:28.856437830Z" level=info msg="StopPodSandbox for \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\" returns successfully" Apr 30 03:32:28.857425 containerd[1797]: time="2025-04-30T03:32:28.857399841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wxch7,Uid:a2f13ad9-2be6-4a46-b67d-267afc984299,Namespace:kube-system,Attempt:1,}" Apr 30 03:32:28.861522 systemd[1]: run-netns-cni\x2dd9902798\x2dea89\x2dbd4b\x2da72a\x2d008d45f0f0da.mount: Deactivated successfully. Apr 30 03:32:28.954703 systemd-networkd[1365]: caliaa2b857b196: Gained IPv6LL Apr 30 03:32:29.084233 systemd-networkd[1365]: cali1a865647175: Link UP Apr 30 03:32:29.084731 systemd-networkd[1365]: cali1a865647175: Gained carrier Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:28.964 [INFO][5223] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0 coredns-7db6d8ff4d- kube-system 340e92ff-7ea8-4903-9227-eed397cdce47 787 0 2025-04-30 03:31:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-6f0285bad0 coredns-7db6d8ff4d-l2bpt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1a865647175 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2bpt" 
WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-" Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:28.965 [INFO][5223] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2bpt" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.027 [INFO][5247] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" HandleID="k8s-pod-network.3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.042 [INFO][5247] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" HandleID="k8s-pod-network.3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285f70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-6f0285bad0", "pod":"coredns-7db6d8ff4d-l2bpt", "timestamp":"2025-04-30 03:32:29.027419411 +0000 UTC"}, Hostname:"ci-4081.3.3-a-6f0285bad0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.043 [INFO][5247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.043 [INFO][5247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.043 [INFO][5247] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-6f0285bad0' Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.046 [INFO][5247] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.051 [INFO][5247] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.057 [INFO][5247] ipam/ipam.go 489: Trying affinity for 192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.059 [INFO][5247] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.061 [INFO][5247] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.061 [INFO][5247] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.64/26 handle="k8s-pod-network.3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.063 [INFO][5247] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.070 [INFO][5247] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.64/26 handle="k8s-pod-network.3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.075 [INFO][5247] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.21.69/26] block=192.168.21.64/26 handle="k8s-pod-network.3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.075 [INFO][5247] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.69/26] handle="k8s-pod-network.3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.075 [INFO][5247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:29.107258 containerd[1797]: 2025-04-30 03:32:29.075 [INFO][5247] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.69/26] IPv6=[] ContainerID="3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" HandleID="k8s-pod-network.3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:29.108229 containerd[1797]: 2025-04-30 03:32:29.079 [INFO][5223] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2bpt" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"340e92ff-7ea8-4903-9227-eed397cdce47", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"", Pod:"coredns-7db6d8ff4d-l2bpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a865647175", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:29.108229 containerd[1797]: 2025-04-30 03:32:29.079 [INFO][5223] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.69/32] ContainerID="3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2bpt" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:29.108229 containerd[1797]: 2025-04-30 03:32:29.079 [INFO][5223] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a865647175 ContainerID="3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2bpt" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:29.108229 containerd[1797]: 2025-04-30 03:32:29.084 [INFO][5223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2bpt" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:29.108229 containerd[1797]: 2025-04-30 03:32:29.084 [INFO][5223] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2bpt" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"340e92ff-7ea8-4903-9227-eed397cdce47", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e", Pod:"coredns-7db6d8ff4d-l2bpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a865647175", MAC:"9a:54:75:82:38:e2", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:29.108229 containerd[1797]: 2025-04-30 03:32:29.105 [INFO][5223] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2bpt" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:29.173052 systemd-networkd[1365]: cali010658adb5e: Link UP Apr 30 03:32:29.175805 systemd-networkd[1365]: cali010658adb5e: Gained carrier Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:28.983 [INFO][5232] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0 coredns-7db6d8ff4d- kube-system a2f13ad9-2be6-4a46-b67d-267afc984299 788 0 2025-04-30 03:31:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-6f0285bad0 coredns-7db6d8ff4d-wxch7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali010658adb5e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wxch7" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-" Apr 30 03:32:29.215775 containerd[1797]: 
2025-04-30 03:32:28.984 [INFO][5232] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wxch7" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.035 [INFO][5252] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" HandleID="k8s-pod-network.8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.054 [INFO][5252] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" HandleID="k8s-pod-network.8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334c50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-6f0285bad0", "pod":"coredns-7db6d8ff4d-wxch7", "timestamp":"2025-04-30 03:32:29.035201301 +0000 UTC"}, Hostname:"ci-4081.3.3-a-6f0285bad0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.054 [INFO][5252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.075 [INFO][5252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.076 [INFO][5252] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-6f0285bad0' Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.078 [INFO][5252] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.104 [INFO][5252] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.123 [INFO][5252] ipam/ipam.go 489: Trying affinity for 192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.125 [INFO][5252] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.132 [INFO][5252] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.64/26 host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.132 [INFO][5252] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.64/26 handle="k8s-pod-network.8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.137 [INFO][5252] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.146 [INFO][5252] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.64/26 handle="k8s-pod-network.8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.161 [INFO][5252] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.21.70/26] block=192.168.21.64/26 handle="k8s-pod-network.8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.162 [INFO][5252] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.70/26] handle="k8s-pod-network.8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" host="ci-4081.3.3-a-6f0285bad0" Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.162 [INFO][5252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:29.215775 containerd[1797]: 2025-04-30 03:32:29.162 [INFO][5252] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.70/26] IPv6=[] ContainerID="8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" HandleID="k8s-pod-network.8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:29.217065 containerd[1797]: 2025-04-30 03:32:29.167 [INFO][5232] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wxch7" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a2f13ad9-2be6-4a46-b67d-267afc984299", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"", Pod:"coredns-7db6d8ff4d-wxch7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali010658adb5e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:29.217065 containerd[1797]: 2025-04-30 03:32:29.167 [INFO][5232] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.70/32] ContainerID="8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wxch7" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:29.217065 containerd[1797]: 2025-04-30 03:32:29.167 [INFO][5232] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali010658adb5e ContainerID="8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wxch7" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:29.217065 containerd[1797]: 2025-04-30 03:32:29.178 [INFO][5232] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wxch7" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:29.217065 containerd[1797]: 2025-04-30 03:32:29.182 [INFO][5232] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wxch7" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a2f13ad9-2be6-4a46-b67d-267afc984299", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c", Pod:"coredns-7db6d8ff4d-wxch7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali010658adb5e", MAC:"92:5c:43:a4:cb:4a", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:29.217065 containerd[1797]: 2025-04-30 03:32:29.209 [INFO][5232] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wxch7" WorkloadEndpoint="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:29.227862 containerd[1797]: time="2025-04-30T03:32:29.226543317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:32:29.227862 containerd[1797]: time="2025-04-30T03:32:29.226618618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:32:29.227862 containerd[1797]: time="2025-04-30T03:32:29.226640718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:29.227862 containerd[1797]: time="2025-04-30T03:32:29.226842120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:29.269330 containerd[1797]: time="2025-04-30T03:32:29.269027309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:32:29.270174 containerd[1797]: time="2025-04-30T03:32:29.269205511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:32:29.270893 containerd[1797]: time="2025-04-30T03:32:29.270767329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:29.273500 containerd[1797]: time="2025-04-30T03:32:29.272677951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:29.321088 containerd[1797]: time="2025-04-30T03:32:29.321015811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l2bpt,Uid:340e92ff-7ea8-4903-9227-eed397cdce47,Namespace:kube-system,Attempt:1,} returns sandbox id \"3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e\"" Apr 30 03:32:29.325139 containerd[1797]: time="2025-04-30T03:32:29.325087658Z" level=info msg="CreateContainer within sandbox \"3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:32:29.335829 systemd-networkd[1365]: cali99fd91f4ec9: Gained IPv6LL Apr 30 03:32:29.349808 containerd[1797]: time="2025-04-30T03:32:29.349763944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wxch7,Uid:a2f13ad9-2be6-4a46-b67d-267afc984299,Namespace:kube-system,Attempt:1,} returns sandbox id \"8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c\"" Apr 30 03:32:29.353402 containerd[1797]: time="2025-04-30T03:32:29.353361486Z" level=info msg="CreateContainer within sandbox \"8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:32:29.609986 containerd[1797]: time="2025-04-30T03:32:29.609933158Z" 
level=info msg="CreateContainer within sandbox \"3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b0db32b9c1aae6f44559984c6e74e3103978800dfab0eb5ee373ed32aee51776\"" Apr 30 03:32:29.610818 containerd[1797]: time="2025-04-30T03:32:29.610759767Z" level=info msg="StartContainer for \"b0db32b9c1aae6f44559984c6e74e3103978800dfab0eb5ee373ed32aee51776\"" Apr 30 03:32:29.647333 containerd[1797]: time="2025-04-30T03:32:29.647177489Z" level=info msg="CreateContainer within sandbox \"8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f75bf46dc4b886e0e9d36e93646ea4b32cbc863553964b798adc57e385b14529\"" Apr 30 03:32:29.650603 containerd[1797]: time="2025-04-30T03:32:29.649145012Z" level=info msg="StartContainer for \"f75bf46dc4b886e0e9d36e93646ea4b32cbc863553964b798adc57e385b14529\"" Apr 30 03:32:29.722585 systemd-networkd[1365]: cali36cbb290ab9: Gained IPv6LL Apr 30 03:32:29.755264 containerd[1797]: time="2025-04-30T03:32:29.755202340Z" level=info msg="StartContainer for \"b0db32b9c1aae6f44559984c6e74e3103978800dfab0eb5ee373ed32aee51776\" returns successfully" Apr 30 03:32:29.856718 containerd[1797]: time="2025-04-30T03:32:29.856446713Z" level=info msg="StartContainer for \"f75bf46dc4b886e0e9d36e93646ea4b32cbc863553964b798adc57e385b14529\" returns successfully" Apr 30 03:32:29.959899 kubelet[3359]: I0430 03:32:29.959609 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wxch7" podStartSLOduration=39.959581208 podStartE2EDuration="39.959581208s" podCreationTimestamp="2025-04-30 03:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:32:29.959270604 +0000 UTC m=+52.946529258" watchObservedRunningTime="2025-04-30 03:32:29.959581208 +0000 UTC 
m=+52.946839962" Apr 30 03:32:30.008317 kubelet[3359]: I0430 03:32:30.005960 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-l2bpt" podStartSLOduration=40.005831443 podStartE2EDuration="40.005831443s" podCreationTimestamp="2025-04-30 03:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:32:30.004453927 +0000 UTC m=+52.991712681" watchObservedRunningTime="2025-04-30 03:32:30.005831443 +0000 UTC m=+52.993090097" Apr 30 03:32:30.423837 systemd-networkd[1365]: cali1a865647175: Gained IPv6LL Apr 30 03:32:30.838076 containerd[1797]: time="2025-04-30T03:32:30.838001182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:30.841070 containerd[1797]: time="2025-04-30T03:32:30.841024517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:32:30.846480 containerd[1797]: time="2025-04-30T03:32:30.846409479Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:30.852693 containerd[1797]: time="2025-04-30T03:32:30.852523550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:30.853905 containerd[1797]: time="2025-04-30T03:32:30.853489861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 3.67530187s" Apr 30 03:32:30.853905 containerd[1797]: time="2025-04-30T03:32:30.853532262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:32:30.854816 containerd[1797]: time="2025-04-30T03:32:30.854793877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:32:30.856454 containerd[1797]: time="2025-04-30T03:32:30.856328694Z" level=info msg="CreateContainer within sandbox \"6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:32:30.872930 systemd-networkd[1365]: cali010658adb5e: Gained IPv6LL Apr 30 03:32:30.899289 containerd[1797]: time="2025-04-30T03:32:30.899234891Z" level=info msg="CreateContainer within sandbox \"6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ea87ab5cb16194d7851be57313a78c0eb98bae1f1f8d6b9fae3ef990df48df55\"" Apr 30 03:32:30.900098 containerd[1797]: time="2025-04-30T03:32:30.899867399Z" level=info msg="StartContainer for \"ea87ab5cb16194d7851be57313a78c0eb98bae1f1f8d6b9fae3ef990df48df55\"" Apr 30 03:32:30.983907 containerd[1797]: time="2025-04-30T03:32:30.983848671Z" level=info msg="StartContainer for \"ea87ab5cb16194d7851be57313a78c0eb98bae1f1f8d6b9fae3ef990df48df55\" returns successfully" Apr 30 03:32:32.259711 containerd[1797]: time="2025-04-30T03:32:32.259648849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:32.274406 containerd[1797]: time="2025-04-30T03:32:32.274313919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, 
bytes read=7912898" Apr 30 03:32:32.288096 containerd[1797]: time="2025-04-30T03:32:32.287997177Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:32.296516 containerd[1797]: time="2025-04-30T03:32:32.296422875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:32.297713 containerd[1797]: time="2025-04-30T03:32:32.297194884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.441686498s" Apr 30 03:32:32.297713 containerd[1797]: time="2025-04-30T03:32:32.297233784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:32:32.298594 containerd[1797]: time="2025-04-30T03:32:32.298569900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:32:32.299823 containerd[1797]: time="2025-04-30T03:32:32.299782114Z" level=info msg="CreateContainer within sandbox \"b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:32:32.354779 containerd[1797]: time="2025-04-30T03:32:32.354731550Z" level=info msg="CreateContainer within sandbox \"b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a4605cdc83a120cbab304b514cec2459a808ce6370b71ccfb2796372b0cad3c6\"" Apr 30 
03:32:32.355721 containerd[1797]: time="2025-04-30T03:32:32.355690461Z" level=info msg="StartContainer for \"a4605cdc83a120cbab304b514cec2459a808ce6370b71ccfb2796372b0cad3c6\"" Apr 30 03:32:32.396150 systemd[1]: run-containerd-runc-k8s.io-a4605cdc83a120cbab304b514cec2459a808ce6370b71ccfb2796372b0cad3c6-runc.sqbaRV.mount: Deactivated successfully. Apr 30 03:32:32.433227 containerd[1797]: time="2025-04-30T03:32:32.433185659Z" level=info msg="StartContainer for \"a4605cdc83a120cbab304b514cec2459a808ce6370b71ccfb2796372b0cad3c6\" returns successfully" Apr 30 03:32:32.664745 containerd[1797]: time="2025-04-30T03:32:32.663977932Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:32.667553 containerd[1797]: time="2025-04-30T03:32:32.667097668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:32:32.669195 containerd[1797]: time="2025-04-30T03:32:32.669164492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 369.50568ms" Apr 30 03:32:32.669324 containerd[1797]: time="2025-04-30T03:32:32.669198792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:32:32.671223 containerd[1797]: time="2025-04-30T03:32:32.671171815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:32:32.672684 containerd[1797]: time="2025-04-30T03:32:32.672656932Z" level=info msg="CreateContainer within sandbox 
\"634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:32:32.737019 containerd[1797]: time="2025-04-30T03:32:32.736963780Z" level=info msg="CreateContainer within sandbox \"634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c765182c8783b7df0d7b018854db9e911e9479d64d2ad5fdd793293a467f8b2a\"" Apr 30 03:32:32.737876 containerd[1797]: time="2025-04-30T03:32:32.737707489Z" level=info msg="StartContainer for \"c765182c8783b7df0d7b018854db9e911e9479d64d2ad5fdd793293a467f8b2a\"" Apr 30 03:32:32.815994 containerd[1797]: time="2025-04-30T03:32:32.815941400Z" level=info msg="StartContainer for \"c765182c8783b7df0d7b018854db9e911e9479d64d2ad5fdd793293a467f8b2a\" returns successfully" Apr 30 03:32:32.962004 kubelet[3359]: I0430 03:32:32.961853 3359 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:32:32.982395 kubelet[3359]: I0430 03:32:32.980351 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cc8c4d69c-f48sp" podStartSLOduration=30.302780518 podStartE2EDuration="33.980325414s" podCreationTimestamp="2025-04-30 03:31:59 +0000 UTC" firstStartedPulling="2025-04-30 03:32:27.177062078 +0000 UTC m=+50.164320732" lastFinishedPulling="2025-04-30 03:32:30.854606874 +0000 UTC m=+53.841865628" observedRunningTime="2025-04-30 03:32:31.969430587 +0000 UTC m=+54.956689341" watchObservedRunningTime="2025-04-30 03:32:32.980325414 +0000 UTC m=+55.967584168" Apr 30 03:32:33.968490 kubelet[3359]: I0430 03:32:33.966894 3359 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:32:34.965967 containerd[1797]: time="2025-04-30T03:32:34.965906737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 30 03:32:34.970834 containerd[1797]: time="2025-04-30T03:32:34.970763393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:32:34.974869 containerd[1797]: time="2025-04-30T03:32:34.974806240Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:34.979580 containerd[1797]: time="2025-04-30T03:32:34.979542396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:34.980821 containerd[1797]: time="2025-04-30T03:32:34.980247304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.309043288s" Apr 30 03:32:34.980821 containerd[1797]: time="2025-04-30T03:32:34.980289404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:32:34.981492 containerd[1797]: time="2025-04-30T03:32:34.981446718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:32:34.989564 containerd[1797]: time="2025-04-30T03:32:34.989528412Z" level=info msg="CreateContainer within sandbox \"27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:32:35.079412 containerd[1797]: 
time="2025-04-30T03:32:35.079367358Z" level=info msg="CreateContainer within sandbox \"27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8d64ac66906ed49b2d5994f30272090c1c67d883b8cbec5b09534714263ef5b1\"" Apr 30 03:32:35.080296 containerd[1797]: time="2025-04-30T03:32:35.080175467Z" level=info msg="StartContainer for \"8d64ac66906ed49b2d5994f30272090c1c67d883b8cbec5b09534714263ef5b1\"" Apr 30 03:32:35.157994 containerd[1797]: time="2025-04-30T03:32:35.157941673Z" level=info msg="StartContainer for \"8d64ac66906ed49b2d5994f30272090c1c67d883b8cbec5b09534714263ef5b1\" returns successfully" Apr 30 03:32:35.995352 kubelet[3359]: I0430 03:32:35.995274 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69f9ffdfcc-4vn5q" podStartSLOduration=30.313288685 podStartE2EDuration="36.995250524s" podCreationTimestamp="2025-04-30 03:31:59 +0000 UTC" firstStartedPulling="2025-04-30 03:32:28.299242476 +0000 UTC m=+51.286501130" lastFinishedPulling="2025-04-30 03:32:34.981204315 +0000 UTC m=+57.968462969" observedRunningTime="2025-04-30 03:32:35.994310013 +0000 UTC m=+58.981568767" watchObservedRunningTime="2025-04-30 03:32:35.995250524 +0000 UTC m=+58.982509278" Apr 30 03:32:35.996763 kubelet[3359]: I0430 03:32:35.996163 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cc8c4d69c-vf8m8" podStartSLOduration=32.591420116 podStartE2EDuration="36.996143634s" podCreationTimestamp="2025-04-30 03:31:59 +0000 UTC" firstStartedPulling="2025-04-30 03:32:28.265430485 +0000 UTC m=+51.252689239" lastFinishedPulling="2025-04-30 03:32:32.670154103 +0000 UTC m=+55.657412757" observedRunningTime="2025-04-30 03:32:32.982994845 +0000 UTC m=+55.970253499" watchObservedRunningTime="2025-04-30 03:32:35.996143634 +0000 UTC m=+58.983402288" Apr 30 03:32:36.458345 
containerd[1797]: time="2025-04-30T03:32:36.458190815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:36.460718 containerd[1797]: time="2025-04-30T03:32:36.460633143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:32:36.464838 containerd[1797]: time="2025-04-30T03:32:36.464750691Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:36.469484 containerd[1797]: time="2025-04-30T03:32:36.469333544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:36.470435 containerd[1797]: time="2025-04-30T03:32:36.470235255Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.488729737s" Apr 30 03:32:36.470435 containerd[1797]: time="2025-04-30T03:32:36.470281555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:32:36.473735 containerd[1797]: time="2025-04-30T03:32:36.473541493Z" level=info msg="CreateContainer within sandbox \"b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" 
Apr 30 03:32:36.509198 containerd[1797]: time="2025-04-30T03:32:36.509131408Z" level=info msg="CreateContainer within sandbox \"b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9f8d54bef79eaff71a2f03d78dd2b7e804d9a5f2fd6fe06b63256bf441e7402d\"" Apr 30 03:32:36.511501 containerd[1797]: time="2025-04-30T03:32:36.509877416Z" level=info msg="StartContainer for \"9f8d54bef79eaff71a2f03d78dd2b7e804d9a5f2fd6fe06b63256bf441e7402d\"" Apr 30 03:32:36.585407 containerd[1797]: time="2025-04-30T03:32:36.585345095Z" level=info msg="StartContainer for \"9f8d54bef79eaff71a2f03d78dd2b7e804d9a5f2fd6fe06b63256bf441e7402d\" returns successfully" Apr 30 03:32:36.789810 kubelet[3359]: I0430 03:32:36.789760 3359 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:32:36.789810 kubelet[3359]: I0430 03:32:36.789798 3359 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:32:36.999736 kubelet[3359]: I0430 03:32:36.998056 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-f5dfm" podStartSLOduration=28.728848102 podStartE2EDuration="37.998034101s" podCreationTimestamp="2025-04-30 03:31:59 +0000 UTC" firstStartedPulling="2025-04-30 03:32:27.20227067 +0000 UTC m=+50.189529424" lastFinishedPulling="2025-04-30 03:32:36.471456769 +0000 UTC m=+59.458715423" observedRunningTime="2025-04-30 03:32:36.997777898 +0000 UTC m=+59.985036552" watchObservedRunningTime="2025-04-30 03:32:36.998034101 +0000 UTC m=+59.985292855" Apr 30 03:32:37.675110 containerd[1797]: time="2025-04-30T03:32:37.675059885Z" level=info msg="StopPodSandbox for \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\"" Apr 30 
03:32:37.745054 containerd[1797]: 2025-04-30 03:32:37.711 [WARNING][5712] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0", GenerateName:"calico-kube-controllers-69f9ffdfcc-", Namespace:"calico-system", SelfLink:"", UID:"72a78e6a-2103-489a-9bb3-6f815a567a66", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69f9ffdfcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83", Pod:"calico-kube-controllers-69f9ffdfcc-4vn5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99fd91f4ec9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:37.745054 containerd[1797]: 2025-04-30 03:32:37.712 [INFO][5712] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Apr 30 03:32:37.745054 containerd[1797]: 2025-04-30 03:32:37.712 [INFO][5712] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" iface="eth0" netns="" Apr 30 03:32:37.745054 containerd[1797]: 2025-04-30 03:32:37.712 [INFO][5712] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Apr 30 03:32:37.745054 containerd[1797]: 2025-04-30 03:32:37.712 [INFO][5712] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Apr 30 03:32:37.745054 containerd[1797]: 2025-04-30 03:32:37.732 [INFO][5719] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" HandleID="k8s-pod-network.340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:37.745054 containerd[1797]: 2025-04-30 03:32:37.732 [INFO][5719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:37.745054 containerd[1797]: 2025-04-30 03:32:37.732 [INFO][5719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:37.745054 containerd[1797]: 2025-04-30 03:32:37.738 [WARNING][5719] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" HandleID="k8s-pod-network.340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:37.745054 containerd[1797]: 2025-04-30 03:32:37.738 [INFO][5719] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" HandleID="k8s-pod-network.340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:37.745054 containerd[1797]: 2025-04-30 03:32:37.741 [INFO][5719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:37.745054 containerd[1797]: 2025-04-30 03:32:37.743 [INFO][5712] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Apr 30 03:32:37.747081 containerd[1797]: time="2025-04-30T03:32:37.745168202Z" level=info msg="TearDown network for sandbox \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\" successfully" Apr 30 03:32:37.747081 containerd[1797]: time="2025-04-30T03:32:37.745308403Z" level=info msg="StopPodSandbox for \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\" returns successfully" Apr 30 03:32:37.748938 containerd[1797]: time="2025-04-30T03:32:37.747593730Z" level=info msg="RemovePodSandbox for \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\"" Apr 30 03:32:37.748938 containerd[1797]: time="2025-04-30T03:32:37.747650830Z" level=info msg="Forcibly stopping sandbox \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\"" Apr 30 03:32:37.818201 containerd[1797]: 2025-04-30 03:32:37.787 [WARNING][5737] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0", GenerateName:"calico-kube-controllers-69f9ffdfcc-", Namespace:"calico-system", SelfLink:"", UID:"72a78e6a-2103-489a-9bb3-6f815a567a66", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69f9ffdfcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"27cee19e57591b0ef031f9d28ebb5335a2db8abd2bea79199ef06316929e5d83", Pod:"calico-kube-controllers-69f9ffdfcc-4vn5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99fd91f4ec9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:37.818201 containerd[1797]: 2025-04-30 03:32:37.787 [INFO][5737] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Apr 30 03:32:37.818201 containerd[1797]: 2025-04-30 03:32:37.787 [INFO][5737] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" iface="eth0" netns="" Apr 30 03:32:37.818201 containerd[1797]: 2025-04-30 03:32:37.787 [INFO][5737] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Apr 30 03:32:37.818201 containerd[1797]: 2025-04-30 03:32:37.787 [INFO][5737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Apr 30 03:32:37.818201 containerd[1797]: 2025-04-30 03:32:37.807 [INFO][5744] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" HandleID="k8s-pod-network.340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:37.818201 containerd[1797]: 2025-04-30 03:32:37.807 [INFO][5744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:37.818201 containerd[1797]: 2025-04-30 03:32:37.807 [INFO][5744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:37.818201 containerd[1797]: 2025-04-30 03:32:37.814 [WARNING][5744] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" HandleID="k8s-pod-network.340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:37.818201 containerd[1797]: 2025-04-30 03:32:37.814 [INFO][5744] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" HandleID="k8s-pod-network.340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--kube--controllers--69f9ffdfcc--4vn5q-eth0" Apr 30 03:32:37.818201 containerd[1797]: 2025-04-30 03:32:37.816 [INFO][5744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:37.818201 containerd[1797]: 2025-04-30 03:32:37.817 [INFO][5737] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867" Apr 30 03:32:37.818885 containerd[1797]: time="2025-04-30T03:32:37.818247653Z" level=info msg="TearDown network for sandbox \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\" successfully" Apr 30 03:32:37.827977 containerd[1797]: time="2025-04-30T03:32:37.827920665Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:32:37.828178 containerd[1797]: time="2025-04-30T03:32:37.828021166Z" level=info msg="RemovePodSandbox \"340f467ca93c5baac1334dce68354bd3b598ad6156ae5b965455bc45fb7db867\" returns successfully" Apr 30 03:32:37.828795 containerd[1797]: time="2025-04-30T03:32:37.828762275Z" level=info msg="StopPodSandbox for \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\"" Apr 30 03:32:37.896037 containerd[1797]: 2025-04-30 03:32:37.865 [WARNING][5762] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c8bd750-0601-46f1-814d-82809dd1a74f", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892", Pod:"csi-node-driver-f5dfm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib406e0d4bbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:37.896037 containerd[1797]: 2025-04-30 03:32:37.865 [INFO][5762] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Apr 30 03:32:37.896037 containerd[1797]: 2025-04-30 03:32:37.865 [INFO][5762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" iface="eth0" netns="" Apr 30 03:32:37.896037 containerd[1797]: 2025-04-30 03:32:37.865 [INFO][5762] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Apr 30 03:32:37.896037 containerd[1797]: 2025-04-30 03:32:37.865 [INFO][5762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Apr 30 03:32:37.896037 containerd[1797]: 2025-04-30 03:32:37.885 [INFO][5770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" HandleID="k8s-pod-network.a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Workload="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:37.896037 containerd[1797]: 2025-04-30 03:32:37.885 [INFO][5770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:37.896037 containerd[1797]: 2025-04-30 03:32:37.885 [INFO][5770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:37.896037 containerd[1797]: 2025-04-30 03:32:37.892 [WARNING][5770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" HandleID="k8s-pod-network.a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Workload="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:37.896037 containerd[1797]: 2025-04-30 03:32:37.892 [INFO][5770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" HandleID="k8s-pod-network.a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Workload="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:37.896037 containerd[1797]: 2025-04-30 03:32:37.893 [INFO][5770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:37.896037 containerd[1797]: 2025-04-30 03:32:37.894 [INFO][5762] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Apr 30 03:32:37.896752 containerd[1797]: time="2025-04-30T03:32:37.896083659Z" level=info msg="TearDown network for sandbox \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\" successfully" Apr 30 03:32:37.896752 containerd[1797]: time="2025-04-30T03:32:37.896118959Z" level=info msg="StopPodSandbox for \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\" returns successfully" Apr 30 03:32:37.896832 containerd[1797]: time="2025-04-30T03:32:37.896747867Z" level=info msg="RemovePodSandbox for \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\"" Apr 30 03:32:37.896832 containerd[1797]: time="2025-04-30T03:32:37.896785067Z" level=info msg="Forcibly stopping sandbox \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\"" Apr 30 03:32:37.972874 containerd[1797]: 2025-04-30 03:32:37.935 [WARNING][5788] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c8bd750-0601-46f1-814d-82809dd1a74f", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"b0f5a2837b9e826d8551d183b3b404572fb6eff760093b871e6cbd760c288892", Pod:"csi-node-driver-f5dfm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib406e0d4bbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:37.972874 containerd[1797]: 2025-04-30 03:32:37.935 [INFO][5788] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Apr 30 03:32:37.972874 containerd[1797]: 2025-04-30 03:32:37.935 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" iface="eth0" netns="" Apr 30 03:32:37.972874 containerd[1797]: 2025-04-30 03:32:37.935 [INFO][5788] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Apr 30 03:32:37.972874 containerd[1797]: 2025-04-30 03:32:37.935 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Apr 30 03:32:37.972874 containerd[1797]: 2025-04-30 03:32:37.956 [INFO][5795] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" HandleID="k8s-pod-network.a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Workload="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:37.972874 containerd[1797]: 2025-04-30 03:32:37.956 [INFO][5795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:37.972874 containerd[1797]: 2025-04-30 03:32:37.957 [INFO][5795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:37.972874 containerd[1797]: 2025-04-30 03:32:37.965 [WARNING][5795] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" HandleID="k8s-pod-network.a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Workload="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:37.972874 containerd[1797]: 2025-04-30 03:32:37.965 [INFO][5795] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" HandleID="k8s-pod-network.a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Workload="ci--4081.3.3--a--6f0285bad0-k8s-csi--node--driver--f5dfm-eth0" Apr 30 03:32:37.972874 containerd[1797]: 2025-04-30 03:32:37.968 [INFO][5795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:37.972874 containerd[1797]: 2025-04-30 03:32:37.970 [INFO][5788] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98" Apr 30 03:32:37.972874 containerd[1797]: time="2025-04-30T03:32:37.972691551Z" level=info msg="TearDown network for sandbox \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\" successfully" Apr 30 03:32:37.984659 containerd[1797]: time="2025-04-30T03:32:37.983966182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:32:37.984659 containerd[1797]: time="2025-04-30T03:32:37.984056883Z" level=info msg="RemovePodSandbox \"a89eb1912ebb90dcd37179a5646fb217c69a11bb6de1aaff1de0339da562ed98\" returns successfully" Apr 30 03:32:37.985476 containerd[1797]: time="2025-04-30T03:32:37.985082295Z" level=info msg="StopPodSandbox for \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\"" Apr 30 03:32:38.067834 containerd[1797]: 2025-04-30 03:32:38.038 [WARNING][5813] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a2f13ad9-2be6-4a46-b67d-267afc984299", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c", Pod:"coredns-7db6d8ff4d-wxch7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali010658adb5e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:38.067834 containerd[1797]: 2025-04-30 03:32:38.038 [INFO][5813] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Apr 30 03:32:38.067834 containerd[1797]: 2025-04-30 03:32:38.038 [INFO][5813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" iface="eth0" netns="" Apr 30 03:32:38.067834 containerd[1797]: 2025-04-30 03:32:38.038 [INFO][5813] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Apr 30 03:32:38.067834 containerd[1797]: 2025-04-30 03:32:38.038 [INFO][5813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Apr 30 03:32:38.067834 containerd[1797]: 2025-04-30 03:32:38.058 [INFO][5820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" HandleID="k8s-pod-network.9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:38.067834 containerd[1797]: 2025-04-30 03:32:38.058 [INFO][5820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 03:32:38.067834 containerd[1797]: 2025-04-30 03:32:38.058 [INFO][5820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:38.067834 containerd[1797]: 2025-04-30 03:32:38.064 [WARNING][5820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" HandleID="k8s-pod-network.9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:38.067834 containerd[1797]: 2025-04-30 03:32:38.064 [INFO][5820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" HandleID="k8s-pod-network.9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:38.067834 containerd[1797]: 2025-04-30 03:32:38.065 [INFO][5820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:38.067834 containerd[1797]: 2025-04-30 03:32:38.066 [INFO][5813] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Apr 30 03:32:38.068847 containerd[1797]: time="2025-04-30T03:32:38.067881360Z" level=info msg="TearDown network for sandbox \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\" successfully" Apr 30 03:32:38.068847 containerd[1797]: time="2025-04-30T03:32:38.067918660Z" level=info msg="StopPodSandbox for \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\" returns successfully" Apr 30 03:32:38.069006 containerd[1797]: time="2025-04-30T03:32:38.068975672Z" level=info msg="RemovePodSandbox for \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\"" Apr 30 03:32:38.069068 containerd[1797]: time="2025-04-30T03:32:38.069039173Z" level=info msg="Forcibly stopping sandbox \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\"" Apr 30 03:32:38.136639 containerd[1797]: 2025-04-30 03:32:38.105 [WARNING][5838] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a2f13ad9-2be6-4a46-b67d-267afc984299", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"8939cacd7a4113038ee8ae304fbfdf0c48a2955bd170ab276d407d4e5a0ef75c", Pod:"coredns-7db6d8ff4d-wxch7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali010658adb5e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:38.136639 containerd[1797]: 2025-04-30 03:32:38.106 [INFO][5838] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Apr 30 03:32:38.136639 containerd[1797]: 2025-04-30 03:32:38.106 [INFO][5838] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" iface="eth0" netns="" Apr 30 03:32:38.136639 containerd[1797]: 2025-04-30 03:32:38.106 [INFO][5838] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Apr 30 03:32:38.136639 containerd[1797]: 2025-04-30 03:32:38.106 [INFO][5838] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Apr 30 03:32:38.136639 containerd[1797]: 2025-04-30 03:32:38.125 [INFO][5846] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" HandleID="k8s-pod-network.9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:38.136639 containerd[1797]: 2025-04-30 03:32:38.126 [INFO][5846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:38.136639 containerd[1797]: 2025-04-30 03:32:38.126 [INFO][5846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:38.136639 containerd[1797]: 2025-04-30 03:32:38.133 [WARNING][5846] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" HandleID="k8s-pod-network.9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:38.136639 containerd[1797]: 2025-04-30 03:32:38.133 [INFO][5846] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" HandleID="k8s-pod-network.9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--wxch7-eth0" Apr 30 03:32:38.136639 containerd[1797]: 2025-04-30 03:32:38.134 [INFO][5846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:38.136639 containerd[1797]: 2025-04-30 03:32:38.135 [INFO][5838] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2" Apr 30 03:32:38.137496 containerd[1797]: time="2025-04-30T03:32:38.136672561Z" level=info msg="TearDown network for sandbox \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\" successfully" Apr 30 03:32:38.146871 containerd[1797]: time="2025-04-30T03:32:38.146785778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:32:38.147045 containerd[1797]: time="2025-04-30T03:32:38.146891580Z" level=info msg="RemovePodSandbox \"9ff46c6d4c00dc50af1643af381e6ba1aa38504826bea9f781fca889e72e0cf2\" returns successfully" Apr 30 03:32:38.147609 containerd[1797]: time="2025-04-30T03:32:38.147575688Z" level=info msg="StopPodSandbox for \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\"" Apr 30 03:32:38.216973 containerd[1797]: 2025-04-30 03:32:38.184 [WARNING][5864] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0", GenerateName:"calico-apiserver-6cc8c4d69c-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c4d69c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681", Pod:"calico-apiserver-6cc8c4d69c-f48sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa2b857b196", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:38.216973 containerd[1797]: 2025-04-30 03:32:38.185 [INFO][5864] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Apr 30 03:32:38.216973 containerd[1797]: 2025-04-30 03:32:38.185 [INFO][5864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" iface="eth0" netns="" Apr 30 03:32:38.216973 containerd[1797]: 2025-04-30 03:32:38.185 [INFO][5864] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Apr 30 03:32:38.216973 containerd[1797]: 2025-04-30 03:32:38.185 [INFO][5864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Apr 30 03:32:38.216973 containerd[1797]: 2025-04-30 03:32:38.207 [INFO][5871] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" HandleID="k8s-pod-network.85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 03:32:38.216973 containerd[1797]: 2025-04-30 03:32:38.207 [INFO][5871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:38.216973 containerd[1797]: 2025-04-30 03:32:38.207 [INFO][5871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:38.216973 containerd[1797]: 2025-04-30 03:32:38.213 [WARNING][5871] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" HandleID="k8s-pod-network.85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 03:32:38.216973 containerd[1797]: 2025-04-30 03:32:38.213 [INFO][5871] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" HandleID="k8s-pod-network.85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 03:32:38.216973 containerd[1797]: 2025-04-30 03:32:38.214 [INFO][5871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:38.216973 containerd[1797]: 2025-04-30 03:32:38.215 [INFO][5864] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Apr 30 03:32:38.217684 containerd[1797]: time="2025-04-30T03:32:38.217014896Z" level=info msg="TearDown network for sandbox \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\" successfully" Apr 30 03:32:38.217684 containerd[1797]: time="2025-04-30T03:32:38.217052697Z" level=info msg="StopPodSandbox for \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\" returns successfully" Apr 30 03:32:38.217771 containerd[1797]: time="2025-04-30T03:32:38.217675004Z" level=info msg="RemovePodSandbox for \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\"" Apr 30 03:32:38.217771 containerd[1797]: time="2025-04-30T03:32:38.217721105Z" level=info msg="Forcibly stopping sandbox \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\"" Apr 30 03:32:38.285484 containerd[1797]: 2025-04-30 03:32:38.254 [WARNING][5889] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0", GenerateName:"calico-apiserver-6cc8c4d69c-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc3a5f85-0fb4-493a-aa6c-4c1eb7d6d656", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c4d69c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"6f2357d159f374aaa01216851322464b8e18d2538f03519dbdf7828c38c49681", Pod:"calico-apiserver-6cc8c4d69c-f48sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa2b857b196", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:38.285484 containerd[1797]: 2025-04-30 03:32:38.255 [INFO][5889] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Apr 30 03:32:38.285484 containerd[1797]: 2025-04-30 03:32:38.255 [INFO][5889] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" iface="eth0" netns="" Apr 30 03:32:38.285484 containerd[1797]: 2025-04-30 03:32:38.255 [INFO][5889] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Apr 30 03:32:38.285484 containerd[1797]: 2025-04-30 03:32:38.255 [INFO][5889] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Apr 30 03:32:38.285484 containerd[1797]: 2025-04-30 03:32:38.274 [INFO][5896] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" HandleID="k8s-pod-network.85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 03:32:38.285484 containerd[1797]: 2025-04-30 03:32:38.274 [INFO][5896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:38.285484 containerd[1797]: 2025-04-30 03:32:38.274 [INFO][5896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:38.285484 containerd[1797]: 2025-04-30 03:32:38.281 [WARNING][5896] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" HandleID="k8s-pod-network.85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 03:32:38.285484 containerd[1797]: 2025-04-30 03:32:38.281 [INFO][5896] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" HandleID="k8s-pod-network.85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--f48sp-eth0" Apr 30 03:32:38.285484 containerd[1797]: 2025-04-30 03:32:38.283 [INFO][5896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:38.285484 containerd[1797]: 2025-04-30 03:32:38.284 [INFO][5889] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16" Apr 30 03:32:38.286137 containerd[1797]: time="2025-04-30T03:32:38.285542194Z" level=info msg="TearDown network for sandbox \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\" successfully" Apr 30 03:32:38.297509 containerd[1797]: time="2025-04-30T03:32:38.297435433Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:32:38.297677 containerd[1797]: time="2025-04-30T03:32:38.297535534Z" level=info msg="RemovePodSandbox \"85b55cb0759abfe43db214e231b5ba5faf38b6b8704305eaf72031bfc280bc16\" returns successfully" Apr 30 03:32:38.298214 containerd[1797]: time="2025-04-30T03:32:38.298182041Z" level=info msg="StopPodSandbox for \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\"" Apr 30 03:32:38.365413 containerd[1797]: 2025-04-30 03:32:38.332 [WARNING][5914] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"340e92ff-7ea8-4903-9227-eed397cdce47", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e", Pod:"coredns-7db6d8ff4d-l2bpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a865647175", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:38.365413 containerd[1797]: 2025-04-30 03:32:38.333 [INFO][5914] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Apr 30 03:32:38.365413 containerd[1797]: 2025-04-30 03:32:38.333 [INFO][5914] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" iface="eth0" netns="" Apr 30 03:32:38.365413 containerd[1797]: 2025-04-30 03:32:38.333 [INFO][5914] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Apr 30 03:32:38.365413 containerd[1797]: 2025-04-30 03:32:38.333 [INFO][5914] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Apr 30 03:32:38.365413 containerd[1797]: 2025-04-30 03:32:38.354 [INFO][5922] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" HandleID="k8s-pod-network.83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:38.365413 containerd[1797]: 2025-04-30 03:32:38.355 [INFO][5922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 03:32:38.365413 containerd[1797]: 2025-04-30 03:32:38.355 [INFO][5922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:38.365413 containerd[1797]: 2025-04-30 03:32:38.361 [WARNING][5922] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" HandleID="k8s-pod-network.83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:38.365413 containerd[1797]: 2025-04-30 03:32:38.361 [INFO][5922] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" HandleID="k8s-pod-network.83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:38.365413 containerd[1797]: 2025-04-30 03:32:38.363 [INFO][5922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:38.365413 containerd[1797]: 2025-04-30 03:32:38.364 [INFO][5914] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Apr 30 03:32:38.366431 containerd[1797]: time="2025-04-30T03:32:38.365426125Z" level=info msg="TearDown network for sandbox \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\" successfully" Apr 30 03:32:38.366431 containerd[1797]: time="2025-04-30T03:32:38.365460325Z" level=info msg="StopPodSandbox for \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\" returns successfully" Apr 30 03:32:38.366431 containerd[1797]: time="2025-04-30T03:32:38.366083832Z" level=info msg="RemovePodSandbox for \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\"" Apr 30 03:32:38.366431 containerd[1797]: time="2025-04-30T03:32:38.366123933Z" level=info msg="Forcibly stopping sandbox \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\"" Apr 30 03:32:38.431153 containerd[1797]: 2025-04-30 03:32:38.401 [WARNING][5940] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"340e92ff-7ea8-4903-9227-eed397cdce47", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"3a65bf58d56711d0dc9acbf6baf95426e60e0df345bd1f07cf6fd989de747c5e", Pod:"coredns-7db6d8ff4d-l2bpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a865647175", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:38.431153 containerd[1797]: 2025-04-30 03:32:38.401 [INFO][5940] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Apr 30 03:32:38.431153 containerd[1797]: 2025-04-30 03:32:38.401 [INFO][5940] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" iface="eth0" netns="" Apr 30 03:32:38.431153 containerd[1797]: 2025-04-30 03:32:38.401 [INFO][5940] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Apr 30 03:32:38.431153 containerd[1797]: 2025-04-30 03:32:38.401 [INFO][5940] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Apr 30 03:32:38.431153 containerd[1797]: 2025-04-30 03:32:38.422 [INFO][5948] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" HandleID="k8s-pod-network.83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:38.431153 containerd[1797]: 2025-04-30 03:32:38.422 [INFO][5948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:38.431153 containerd[1797]: 2025-04-30 03:32:38.422 [INFO][5948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:38.431153 containerd[1797]: 2025-04-30 03:32:38.427 [WARNING][5948] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" HandleID="k8s-pod-network.83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:38.431153 containerd[1797]: 2025-04-30 03:32:38.427 [INFO][5948] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" HandleID="k8s-pod-network.83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Workload="ci--4081.3.3--a--6f0285bad0-k8s-coredns--7db6d8ff4d--l2bpt-eth0" Apr 30 03:32:38.431153 containerd[1797]: 2025-04-30 03:32:38.429 [INFO][5948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:38.431153 containerd[1797]: 2025-04-30 03:32:38.429 [INFO][5940] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64" Apr 30 03:32:38.431972 containerd[1797]: time="2025-04-30T03:32:38.431205591Z" level=info msg="TearDown network for sandbox \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\" successfully" Apr 30 03:32:38.441620 containerd[1797]: time="2025-04-30T03:32:38.441572411Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:32:38.441790 containerd[1797]: time="2025-04-30T03:32:38.441667712Z" level=info msg="RemovePodSandbox \"83259c1b0e982d47114d4c5eba1aa8b35164d5805bdaceb334e6d582fe0c9b64\" returns successfully" Apr 30 03:32:38.442331 containerd[1797]: time="2025-04-30T03:32:38.442293320Z" level=info msg="StopPodSandbox for \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\"" Apr 30 03:32:38.509741 containerd[1797]: 2025-04-30 03:32:38.476 [WARNING][5966] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0", GenerateName:"calico-apiserver-6cc8c4d69c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f183badf-71b7-4297-a2f8-acdc049a5567", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c4d69c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e", Pod:"calico-apiserver-6cc8c4d69c-vf8m8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36cbb290ab9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:38.509741 containerd[1797]: 2025-04-30 03:32:38.477 [INFO][5966] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Apr 30 03:32:38.509741 containerd[1797]: 2025-04-30 03:32:38.477 [INFO][5966] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" iface="eth0" netns="" Apr 30 03:32:38.509741 containerd[1797]: 2025-04-30 03:32:38.477 [INFO][5966] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Apr 30 03:32:38.509741 containerd[1797]: 2025-04-30 03:32:38.477 [INFO][5966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Apr 30 03:32:38.509741 containerd[1797]: 2025-04-30 03:32:38.496 [INFO][5973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" HandleID="k8s-pod-network.651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:38.509741 containerd[1797]: 2025-04-30 03:32:38.496 [INFO][5973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:38.509741 containerd[1797]: 2025-04-30 03:32:38.496 [INFO][5973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:38.509741 containerd[1797]: 2025-04-30 03:32:38.506 [WARNING][5973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" HandleID="k8s-pod-network.651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:38.509741 containerd[1797]: 2025-04-30 03:32:38.506 [INFO][5973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" HandleID="k8s-pod-network.651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:38.509741 containerd[1797]: 2025-04-30 03:32:38.507 [INFO][5973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:38.509741 containerd[1797]: 2025-04-30 03:32:38.508 [INFO][5966] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Apr 30 03:32:38.510359 containerd[1797]: time="2025-04-30T03:32:38.509787606Z" level=info msg="TearDown network for sandbox \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\" successfully" Apr 30 03:32:38.510359 containerd[1797]: time="2025-04-30T03:32:38.509822906Z" level=info msg="StopPodSandbox for \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\" returns successfully" Apr 30 03:32:38.510439 containerd[1797]: time="2025-04-30T03:32:38.510378113Z" level=info msg="RemovePodSandbox for \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\"" Apr 30 03:32:38.510439 containerd[1797]: time="2025-04-30T03:32:38.510414113Z" level=info msg="Forcibly stopping sandbox \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\"" Apr 30 03:32:38.573315 containerd[1797]: 2025-04-30 03:32:38.544 [WARNING][5991] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0", GenerateName:"calico-apiserver-6cc8c4d69c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f183badf-71b7-4297-a2f8-acdc049a5567", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c4d69c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-6f0285bad0", ContainerID:"634ac85104776d5435f83ee80514295c4c22c865325c1f70016dfe304788959e", Pod:"calico-apiserver-6cc8c4d69c-vf8m8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36cbb290ab9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:32:38.573315 containerd[1797]: 2025-04-30 03:32:38.544 [INFO][5991] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Apr 30 03:32:38.573315 containerd[1797]: 2025-04-30 03:32:38.544 [INFO][5991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" iface="eth0" netns="" Apr 30 03:32:38.573315 containerd[1797]: 2025-04-30 03:32:38.544 [INFO][5991] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Apr 30 03:32:38.573315 containerd[1797]: 2025-04-30 03:32:38.544 [INFO][5991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Apr 30 03:32:38.573315 containerd[1797]: 2025-04-30 03:32:38.563 [INFO][5998] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" HandleID="k8s-pod-network.651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:38.573315 containerd[1797]: 2025-04-30 03:32:38.563 [INFO][5998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:32:38.573315 containerd[1797]: 2025-04-30 03:32:38.563 [INFO][5998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:32:38.573315 containerd[1797]: 2025-04-30 03:32:38.569 [WARNING][5998] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" HandleID="k8s-pod-network.651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:38.573315 containerd[1797]: 2025-04-30 03:32:38.569 [INFO][5998] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" HandleID="k8s-pod-network.651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Workload="ci--4081.3.3--a--6f0285bad0-k8s-calico--apiserver--6cc8c4d69c--vf8m8-eth0" Apr 30 03:32:38.573315 containerd[1797]: 2025-04-30 03:32:38.571 [INFO][5998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:32:38.573315 containerd[1797]: 2025-04-30 03:32:38.572 [INFO][5991] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39" Apr 30 03:32:38.573315 containerd[1797]: time="2025-04-30T03:32:38.573271445Z" level=info msg="TearDown network for sandbox \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\" successfully" Apr 30 03:32:38.584749 containerd[1797]: time="2025-04-30T03:32:38.584674678Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:32:38.584892 containerd[1797]: time="2025-04-30T03:32:38.584766779Z" level=info msg="RemovePodSandbox \"651d6f1a422533ad7b867c44ac31c65bd81730ecfdde4366ab7174d4d8572b39\" returns successfully" Apr 30 03:32:40.497807 kubelet[3359]: I0430 03:32:40.497583 3359 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:32:47.265940 kubelet[3359]: I0430 03:32:47.265531 3359 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:32:48.566770 systemd[1]: Started sshd@7-10.200.8.29:22-10.200.16.10:46080.service - OpenSSH per-connection server daemon (10.200.16.10:46080). Apr 30 03:32:49.189338 sshd[6038]: Accepted publickey for core from 10.200.16.10 port 46080 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:32:49.191044 sshd[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:32:49.196447 systemd-logind[1775]: New session 10 of user core. Apr 30 03:32:49.202455 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:32:49.700200 sshd[6038]: pam_unix(sshd:session): session closed for user core Apr 30 03:32:49.704374 systemd[1]: sshd@7-10.200.8.29:22-10.200.16.10:46080.service: Deactivated successfully. Apr 30 03:32:49.709454 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:32:49.710438 systemd-logind[1775]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:32:49.711432 systemd-logind[1775]: Removed session 10. Apr 30 03:32:54.812002 systemd[1]: Started sshd@8-10.200.8.29:22-10.200.16.10:41962.service - OpenSSH per-connection server daemon (10.200.16.10:41962). Apr 30 03:32:55.434051 sshd[6057]: Accepted publickey for core from 10.200.16.10 port 41962 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:32:55.435699 sshd[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:32:55.439919 systemd-logind[1775]: New session 11 of user core. 
Apr 30 03:32:55.444757 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 03:32:55.938237 sshd[6057]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:55.941604 systemd[1]: sshd@8-10.200.8.29:22-10.200.16.10:41962.service: Deactivated successfully.
Apr 30 03:32:55.947068 systemd-logind[1775]: Session 11 logged out. Waiting for processes to exit.
Apr 30 03:32:55.947907 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 03:32:55.949287 systemd-logind[1775]: Removed session 11.
Apr 30 03:33:01.046176 systemd[1]: Started sshd@9-10.200.8.29:22-10.200.16.10:58366.service - OpenSSH per-connection server daemon (10.200.16.10:58366).
Apr 30 03:33:01.680260 sshd[6072]: Accepted publickey for core from 10.200.16.10 port 58366 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:33:01.681969 sshd[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:33:01.686328 systemd-logind[1775]: New session 12 of user core.
Apr 30 03:33:01.692742 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 03:33:02.185181 sshd[6072]: pam_unix(sshd:session): session closed for user core
Apr 30 03:33:02.188912 systemd[1]: sshd@9-10.200.8.29:22-10.200.16.10:58366.service: Deactivated successfully.
Apr 30 03:33:02.194106 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 03:33:02.195273 systemd-logind[1775]: Session 12 logged out. Waiting for processes to exit.
Apr 30 03:33:02.196797 systemd-logind[1775]: Removed session 12.
Apr 30 03:33:02.294865 systemd[1]: Started sshd@10-10.200.8.29:22-10.200.16.10:58382.service - OpenSSH per-connection server daemon (10.200.16.10:58382).
Apr 30 03:33:02.913503 sshd[6086]: Accepted publickey for core from 10.200.16.10 port 58382 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:33:02.915318 sshd[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:33:02.919691 systemd-logind[1775]: New session 13 of user core.
Apr 30 03:33:02.925281 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 03:33:03.456341 sshd[6086]: pam_unix(sshd:session): session closed for user core
Apr 30 03:33:03.460776 systemd[1]: sshd@10-10.200.8.29:22-10.200.16.10:58382.service: Deactivated successfully.
Apr 30 03:33:03.465657 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 03:33:03.466667 systemd-logind[1775]: Session 13 logged out. Waiting for processes to exit.
Apr 30 03:33:03.467803 systemd-logind[1775]: Removed session 13.
Apr 30 03:33:03.565571 systemd[1]: Started sshd@11-10.200.8.29:22-10.200.16.10:58392.service - OpenSSH per-connection server daemon (10.200.16.10:58392).
Apr 30 03:33:04.197964 sshd[6104]: Accepted publickey for core from 10.200.16.10 port 58392 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:33:04.199722 sshd[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:33:04.204385 systemd-logind[1775]: New session 14 of user core.
Apr 30 03:33:04.208737 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 03:33:04.700781 sshd[6104]: pam_unix(sshd:session): session closed for user core
Apr 30 03:33:04.704605 systemd[1]: sshd@11-10.200.8.29:22-10.200.16.10:58392.service: Deactivated successfully.
Apr 30 03:33:04.710146 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 03:33:04.711098 systemd-logind[1775]: Session 14 logged out. Waiting for processes to exit.
Apr 30 03:33:04.712666 systemd-logind[1775]: Removed session 14.
Apr 30 03:33:07.268701 systemd[1]: run-containerd-runc-k8s.io-15aa1924b21c515d3f4e9a2be387ee66ea071f86f1086fda2d4ed39affad2266-runc.g6TPqK.mount: Deactivated successfully.
Apr 30 03:33:09.814044 systemd[1]: Started sshd@12-10.200.8.29:22-10.200.16.10:50892.service - OpenSSH per-connection server daemon (10.200.16.10:50892).
Apr 30 03:33:10.449191 sshd[6143]: Accepted publickey for core from 10.200.16.10 port 50892 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:33:10.450835 sshd[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:33:10.455199 systemd-logind[1775]: New session 15 of user core.
Apr 30 03:33:10.460822 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 03:33:10.955825 sshd[6143]: pam_unix(sshd:session): session closed for user core
Apr 30 03:33:10.960781 systemd[1]: sshd@12-10.200.8.29:22-10.200.16.10:50892.service: Deactivated successfully.
Apr 30 03:33:10.966075 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 03:33:10.967610 systemd-logind[1775]: Session 15 logged out. Waiting for processes to exit.
Apr 30 03:33:10.969532 systemd-logind[1775]: Removed session 15.
Apr 30 03:33:16.064799 systemd[1]: Started sshd@13-10.200.8.29:22-10.200.16.10:50904.service - OpenSSH per-connection server daemon (10.200.16.10:50904).
Apr 30 03:33:16.684702 sshd[6199]: Accepted publickey for core from 10.200.16.10 port 50904 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:33:16.686449 sshd[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:33:16.690805 systemd-logind[1775]: New session 16 of user core.
Apr 30 03:33:16.693817 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 03:33:17.185730 sshd[6199]: pam_unix(sshd:session): session closed for user core
Apr 30 03:33:17.190617 systemd[1]: sshd@13-10.200.8.29:22-10.200.16.10:50904.service: Deactivated successfully.
Apr 30 03:33:17.195154 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 03:33:17.196139 systemd-logind[1775]: Session 16 logged out. Waiting for processes to exit.
Apr 30 03:33:17.197201 systemd-logind[1775]: Removed session 16.
Apr 30 03:33:22.295412 systemd[1]: Started sshd@14-10.200.8.29:22-10.200.16.10:57102.service - OpenSSH per-connection server daemon (10.200.16.10:57102).
Apr 30 03:33:22.918242 sshd[6213]: Accepted publickey for core from 10.200.16.10 port 57102 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:33:22.919894 sshd[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:33:22.924762 systemd-logind[1775]: New session 17 of user core.
Apr 30 03:33:22.929707 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 03:33:23.432150 sshd[6213]: pam_unix(sshd:session): session closed for user core
Apr 30 03:33:23.435771 systemd[1]: sshd@14-10.200.8.29:22-10.200.16.10:57102.service: Deactivated successfully.
Apr 30 03:33:23.441472 systemd-logind[1775]: Session 17 logged out. Waiting for processes to exit.
Apr 30 03:33:23.442788 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 03:33:23.444397 systemd-logind[1775]: Removed session 17.
Apr 30 03:33:23.548757 systemd[1]: Started sshd@15-10.200.8.29:22-10.200.16.10:57108.service - OpenSSH per-connection server daemon (10.200.16.10:57108).
Apr 30 03:33:24.207039 sshd[6229]: Accepted publickey for core from 10.200.16.10 port 57108 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:33:24.208707 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:33:24.213109 systemd-logind[1775]: New session 18 of user core.
Apr 30 03:33:24.220010 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 03:33:24.782875 sshd[6229]: pam_unix(sshd:session): session closed for user core
Apr 30 03:33:24.786731 systemd[1]: sshd@15-10.200.8.29:22-10.200.16.10:57108.service: Deactivated successfully.
Apr 30 03:33:24.794031 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 03:33:24.794936 systemd-logind[1775]: Session 18 logged out. Waiting for processes to exit.
Apr 30 03:33:24.795987 systemd-logind[1775]: Removed session 18.
Apr 30 03:33:24.892178 systemd[1]: Started sshd@16-10.200.8.29:22-10.200.16.10:57124.service - OpenSSH per-connection server daemon (10.200.16.10:57124).
Apr 30 03:33:25.513245 sshd[6240]: Accepted publickey for core from 10.200.16.10 port 57124 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:33:25.515286 sshd[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:33:25.521165 systemd-logind[1775]: New session 19 of user core.
Apr 30 03:33:25.524772 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 03:33:27.675097 sshd[6240]: pam_unix(sshd:session): session closed for user core
Apr 30 03:33:27.679842 systemd[1]: sshd@16-10.200.8.29:22-10.200.16.10:57124.service: Deactivated successfully.
Apr 30 03:33:27.686001 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 03:33:27.686902 systemd-logind[1775]: Session 19 logged out. Waiting for processes to exit.
Apr 30 03:33:27.688022 systemd-logind[1775]: Removed session 19.
Apr 30 03:33:27.783772 systemd[1]: Started sshd@17-10.200.8.29:22-10.200.16.10:57126.service - OpenSSH per-connection server daemon (10.200.16.10:57126).
Apr 30 03:33:28.406579 sshd[6259]: Accepted publickey for core from 10.200.16.10 port 57126 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:33:28.414158 sshd[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:33:28.418506 systemd-logind[1775]: New session 20 of user core.
Apr 30 03:33:28.422963 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 03:33:29.015551 sshd[6259]: pam_unix(sshd:session): session closed for user core
Apr 30 03:33:29.019234 systemd[1]: sshd@17-10.200.8.29:22-10.200.16.10:57126.service: Deactivated successfully.
Apr 30 03:33:29.025895 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 03:33:29.026945 systemd-logind[1775]: Session 20 logged out. Waiting for processes to exit.
Apr 30 03:33:29.027983 systemd-logind[1775]: Removed session 20.
Apr 30 03:33:29.126813 systemd[1]: Started sshd@18-10.200.8.29:22-10.200.16.10:43168.service - OpenSSH per-connection server daemon (10.200.16.10:43168).
Apr 30 03:33:29.747613 sshd[6270]: Accepted publickey for core from 10.200.16.10 port 43168 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:33:29.749213 sshd[6270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:33:29.753502 systemd-logind[1775]: New session 21 of user core.
Apr 30 03:33:29.758858 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 03:33:30.255290 sshd[6270]: pam_unix(sshd:session): session closed for user core
Apr 30 03:33:30.259043 systemd[1]: sshd@18-10.200.8.29:22-10.200.16.10:43168.service: Deactivated successfully.
Apr 30 03:33:30.265614 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 03:33:30.266612 systemd-logind[1775]: Session 21 logged out. Waiting for processes to exit.
Apr 30 03:33:30.267658 systemd-logind[1775]: Removed session 21.
Apr 30 03:33:35.367822 systemd[1]: Started sshd@19-10.200.8.29:22-10.200.16.10:43174.service - OpenSSH per-connection server daemon (10.200.16.10:43174).
Apr 30 03:33:35.989648 sshd[6287]: Accepted publickey for core from 10.200.16.10 port 43174 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:33:35.991482 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:33:35.995979 systemd-logind[1775]: New session 22 of user core.
Apr 30 03:33:36.002722 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 03:33:36.489845 sshd[6287]: pam_unix(sshd:session): session closed for user core
Apr 30 03:33:36.493138 systemd[1]: sshd@19-10.200.8.29:22-10.200.16.10:43174.service: Deactivated successfully.
Apr 30 03:33:36.498345 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 03:33:36.498512 systemd-logind[1775]: Session 22 logged out. Waiting for processes to exit.
Apr 30 03:33:36.500775 systemd-logind[1775]: Removed session 22.
Apr 30 03:33:37.270196 systemd[1]: run-containerd-runc-k8s.io-15aa1924b21c515d3f4e9a2be387ee66ea071f86f1086fda2d4ed39affad2266-runc.gih9LX.mount: Deactivated successfully.
Apr 30 03:33:41.600953 systemd[1]: Started sshd@20-10.200.8.29:22-10.200.16.10:51002.service - OpenSSH per-connection server daemon (10.200.16.10:51002).
Apr 30 03:33:42.240204 sshd[6323]: Accepted publickey for core from 10.200.16.10 port 51002 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:33:42.242265 sshd[6323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:33:42.250873 systemd-logind[1775]: New session 23 of user core.
Apr 30 03:33:42.258415 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 03:33:42.789430 sshd[6323]: pam_unix(sshd:session): session closed for user core
Apr 30 03:33:42.794405 systemd[1]: sshd@20-10.200.8.29:22-10.200.16.10:51002.service: Deactivated successfully.
Apr 30 03:33:42.799278 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 03:33:42.800183 systemd-logind[1775]: Session 23 logged out. Waiting for processes to exit.
Apr 30 03:33:42.801260 systemd-logind[1775]: Removed session 23.
Apr 30 03:33:47.899802 systemd[1]: Started sshd@21-10.200.8.29:22-10.200.16.10:51012.service - OpenSSH per-connection server daemon (10.200.16.10:51012).
Apr 30 03:33:48.520429 sshd[6361]: Accepted publickey for core from 10.200.16.10 port 51012 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:33:48.522074 sshd[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:33:48.526459 systemd-logind[1775]: New session 24 of user core.
Apr 30 03:33:48.532085 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 03:33:49.017387 sshd[6361]: pam_unix(sshd:session): session closed for user core
Apr 30 03:33:49.020874 systemd[1]: sshd@21-10.200.8.29:22-10.200.16.10:51012.service: Deactivated successfully.
Apr 30 03:33:49.027027 systemd-logind[1775]: Session 24 logged out. Waiting for processes to exit.
Apr 30 03:33:49.027799 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 03:33:49.029046 systemd-logind[1775]: Removed session 24.
Apr 30 03:33:54.135802 systemd[1]: Started sshd@22-10.200.8.29:22-10.200.16.10:46766.service - OpenSSH per-connection server daemon (10.200.16.10:46766).
Apr 30 03:33:54.757176 sshd[6384]: Accepted publickey for core from 10.200.16.10 port 46766 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:33:54.759032 sshd[6384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:33:54.764354 systemd-logind[1775]: New session 25 of user core.
Apr 30 03:33:54.770745 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 03:33:55.253301 sshd[6384]: pam_unix(sshd:session): session closed for user core
Apr 30 03:33:55.257609 systemd[1]: sshd@22-10.200.8.29:22-10.200.16.10:46766.service: Deactivated successfully.
Apr 30 03:33:55.262931 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 03:33:55.263880 systemd-logind[1775]: Session 25 logged out. Waiting for processes to exit.
Apr 30 03:33:55.264918 systemd-logind[1775]: Removed session 25.
Apr 30 03:34:00.362841 systemd[1]: Started sshd@23-10.200.8.29:22-10.200.16.10:43274.service - OpenSSH per-connection server daemon (10.200.16.10:43274).
Apr 30 03:34:00.984737 sshd[6411]: Accepted publickey for core from 10.200.16.10 port 43274 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:34:00.986821 sshd[6411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:34:00.992125 systemd-logind[1775]: New session 26 of user core.
Apr 30 03:34:00.997777 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 03:34:01.495071 sshd[6411]: pam_unix(sshd:session): session closed for user core
Apr 30 03:34:01.498215 systemd[1]: sshd@23-10.200.8.29:22-10.200.16.10:43274.service: Deactivated successfully.
Apr 30 03:34:01.504509 systemd-logind[1775]: Session 26 logged out. Waiting for processes to exit.
Apr 30 03:34:01.504639 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 03:34:01.505969 systemd-logind[1775]: Removed session 26.