Apr 30 03:27:58.109940 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:27:58.109978 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:27:58.109992 kernel: BIOS-provided physical RAM map:
Apr 30 03:27:58.110003 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 03:27:58.110012 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Apr 30 03:27:58.110022 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Apr 30 03:27:58.110034 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Apr 30 03:27:58.110048 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Apr 30 03:27:58.110058 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Apr 30 03:27:58.110070 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Apr 30 03:27:58.110081 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Apr 30 03:27:58.110111 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Apr 30 03:27:58.110121 kernel: printk: bootconsole [earlyser0] enabled
Apr 30 03:27:58.110131 kernel: NX (Execute Disable) protection: active
Apr 30 03:27:58.110148 kernel: APIC: Static calls initialized
Apr 30 03:27:58.110158 kernel: efi: EFI v2.7 by Microsoft
Apr 30 03:27:58.110169 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Apr 30 03:27:58.110179 kernel: SMBIOS 3.1.0 present.
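[Editor's note: the e820 map above is the firmware's description of physical RAM, and everything the kernel reports later (zone ranges, the "Memory:" line) is derived from it. As a cross-check, the usable ranges can be totaled straight from these lines; a minimal Python sketch, written for this log's line format (the regex and helper name are ours, not anything from the kernel):

    import re

    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    def usable_bytes(log_lines):
        """Sum the inclusive [start-end] ranges the firmware marked usable."""
        total = 0
        for line in log_lines:
            m = E820_RE.search(line)
            if m and m.group(3) == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                total += end - start + 1  # ranges are inclusive of both ends
        return total

The four usable ranges above sum to 8,588,697,600 bytes, just under 8 GiB, which lines up (to within the kernel's own accounting of reserved pages) with the "Memory: 8069608K/8387460K available" line printed once the allocator is up.]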
Apr 30 03:27:58.110189 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Apr 30 03:27:58.110200 kernel: Hypervisor detected: Microsoft Hyper-V
Apr 30 03:27:58.110211 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Apr 30 03:27:58.110222 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Apr 30 03:27:58.110234 kernel: Hyper-V: Nested features: 0x1e0101
Apr 30 03:27:58.110245 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Apr 30 03:27:58.110261 kernel: Hyper-V: Using hypercall for remote TLB flush
Apr 30 03:27:58.110273 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 30 03:27:58.110286 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 30 03:27:58.110299 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Apr 30 03:27:58.110311 kernel: tsc: Detected 2593.908 MHz processor
Apr 30 03:27:58.110324 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:27:58.110337 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:27:58.110349 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Apr 30 03:27:58.110362 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 30 03:27:58.110377 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:27:58.110390 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Apr 30 03:27:58.110401 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Apr 30 03:27:58.110414 kernel: Using GB pages for direct mapping
Apr 30 03:27:58.110426 kernel: Secure boot disabled
Apr 30 03:27:58.110439 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:27:58.110450 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Apr 30 03:27:58.110467 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:27:58.110481 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:27:58.110494 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Apr 30 03:27:58.110507 kernel: ACPI: FACS 0x000000003FFFE000 000040
Apr 30 03:27:58.110519 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:27:58.110532 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:27:58.110544 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:27:58.110560 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:27:58.110574 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:27:58.110588 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:27:58.110601 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 03:27:58.110613 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Apr 30 03:27:58.110628 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Apr 30 03:27:58.110642 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Apr 30 03:27:58.110656 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Apr 30 03:27:58.110673 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Apr 30 03:27:58.110685 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Apr 30 03:27:58.110698 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Apr 30 03:27:58.110708 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Apr 30 03:27:58.110720 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Apr 30 03:27:58.110731 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Apr 30 03:27:58.110743 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 03:27:58.110755 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 03:27:58.110768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Apr 30 03:27:58.110784 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Apr 30 03:27:58.110795 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Apr 30 03:27:58.110806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Apr 30 03:27:58.111699 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Apr 30 03:27:58.111720 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Apr 30 03:27:58.111736 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Apr 30 03:27:58.111750 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Apr 30 03:27:58.111764 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Apr 30 03:27:58.111778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Apr 30 03:27:58.111797 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Apr 30 03:27:58.111811 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Apr 30 03:27:58.111826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Apr 30 03:27:58.111840 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Apr 30 03:27:58.111853 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Apr 30 03:27:58.111867 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Apr 30 03:27:58.111881 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Apr 30 03:27:58.111895 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Apr 30 03:27:58.111909 kernel: Zone ranges:
Apr 30 03:27:58.111927 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:27:58.111940 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 30 03:27:58.111954 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Apr 30 03:27:58.111968 kernel: Movable zone start for each node
Apr 30 03:27:58.111982 kernel: Early memory node ranges
Apr 30 03:27:58.111996 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 30 03:27:58.112012 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Apr 30 03:27:58.112025 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Apr 30 03:27:58.112038 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Apr 30 03:27:58.112056 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Apr 30 03:27:58.112070 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:27:58.112083 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 30 03:27:58.112119 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Apr 30 03:27:58.112133 kernel: ACPI: PM-Timer IO Port: 0x408
Apr 30 03:27:58.112146 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Apr 30 03:27:58.112160 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:27:58.112173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:27:58.112187 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:27:58.112206 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Apr 30 03:27:58.112220 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:27:58.112233 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Apr 30 03:27:58.112246 kernel: Booting paravirtualized kernel on Hyper-V
Apr 30 03:27:58.112260 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:27:58.112275 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:27:58.112288 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:27:58.112302 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:27:58.112316 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:27:58.112332 kernel: Hyper-V: PV spinlocks enabled
Apr 30 03:27:58.112346 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 03:27:58.112362 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:27:58.112376 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:27:58.112390 kernel: random: crng init done
Apr 30 03:27:58.112405 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 30 03:27:58.112418 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 03:27:58.112431 kernel: Fallback order for Node 0: 0
Apr 30 03:27:58.112449 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Apr 30 03:27:58.112474 kernel: Policy zone: Normal
Apr 30 03:27:58.112491 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:27:58.112506 kernel: software IO TLB: area num 2.
Apr 30 03:27:58.112521 kernel: Memory: 8069608K/8387460K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 317592K reserved, 0K cma-reserved)
Apr 30 03:27:58.112535 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:27:58.112548 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:27:58.112562 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:27:58.112576 kernel: Dynamic Preempt: voluntary
Apr 30 03:27:58.112590 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:27:58.112605 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:27:58.112623 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:27:58.112637 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:27:58.112651 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:27:58.112665 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:27:58.112679 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:27:58.112696 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:27:58.112710 kernel: Using NULL legacy PIC
Apr 30 03:27:58.112723 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Apr 30 03:27:58.112737 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:27:58.112752 kernel: Console: colour dummy device 80x25
Apr 30 03:27:58.112768 kernel: printk: console [tty1] enabled
Apr 30 03:27:58.112784 kernel: printk: console [ttyS0] enabled
Apr 30 03:27:58.112801 kernel: printk: bootconsole [earlyser0] disabled
Apr 30 03:27:58.112816 kernel: ACPI: Core revision 20230628
Apr 30 03:27:58.112832 kernel: Failed to register legacy timer interrupt
Apr 30 03:27:58.112852 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:27:58.112867 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 30 03:27:58.112882 kernel: Hyper-V: Using IPI hypercalls
Apr 30 03:27:58.112899 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Apr 30 03:27:58.112915 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Apr 30 03:27:58.112931 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Apr 30 03:27:58.112947 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Apr 30 03:27:58.112962 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Apr 30 03:27:58.112979 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Apr 30 03:27:58.112999 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908)
Apr 30 03:27:58.113015 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 30 03:27:58.113030 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 30 03:27:58.113045 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:27:58.113059 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:27:58.113074 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:27:58.113103 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:27:58.113116 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 30 03:27:58.113129 kernel: RETBleed: Vulnerable
Apr 30 03:27:58.113147 kernel: Speculative Store Bypass: Vulnerable
Apr 30 03:27:58.113160 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:27:58.113173 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:27:58.113187 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:27:58.113200 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:27:58.113213 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:27:58.113227 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 30 03:27:58.113241 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 30 03:27:58.113255 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 30 03:27:58.113269 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:27:58.113282 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 30 03:27:58.113298 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 30 03:27:58.113311 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 30 03:27:58.113325 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 30 03:27:58.113338 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:27:58.113351 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:27:58.113364 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:27:58.113379 kernel: landlock: Up and running.
Apr 30 03:27:58.113394 kernel: SELinux: Initializing.
Apr 30 03:27:58.113409 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:27:58.113425 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:27:58.113439 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 30 03:27:58.113454 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:27:58.113471 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:27:58.113485 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:27:58.113499 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 30 03:27:58.113512 kernel: signal: max sigframe size: 3632
Apr 30 03:27:58.113526 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:27:58.113541 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:27:58.113554 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 03:27:58.113568 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:27:58.113585 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:27:58.113599 kernel: .... node #0, CPUs: #1
Apr 30 03:27:58.113613 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Apr 30 03:27:58.113628 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 30 03:27:58.113641 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:27:58.113655 kernel: smpboot: Max logical packages: 1
Apr 30 03:27:58.113669 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS)
Apr 30 03:27:58.113682 kernel: devtmpfs: initialized
Apr 30 03:27:58.113697 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:27:58.113713 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Apr 30 03:27:58.113727 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:27:58.113741 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:27:58.113754 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:27:58.113768 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:27:58.113782 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:27:58.113796 kernel: audit: type=2000 audit(1745983676.028:1): state=initialized audit_enabled=0 res=1
Apr 30 03:27:58.113810 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:27:58.113824 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:27:58.113840 kernel: cpuidle: using governor menu
Apr 30 03:27:58.113854 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:27:58.113867 kernel: dca service started, version 1.12.1
Apr 30 03:27:58.113881 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Apr 30 03:27:58.113895 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:27:58.113909 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:27:58.113923 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:27:58.113937 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:27:58.113951 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:27:58.113967 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:27:58.113982 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:27:58.113996 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:27:58.114010 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:27:58.114023 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:27:58.114037 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:27:58.114051 kernel: ACPI: Interpreter enabled
Apr 30 03:27:58.114066 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:27:58.114079 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:27:58.114115 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:27:58.114129 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 30 03:27:58.114144 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 30 03:27:58.114158 kernel: iommu: Default domain type: Translated
Apr 30 03:27:58.114172 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:27:58.114186 kernel: efivars: Registered efivars operations
Apr 30 03:27:58.114200 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:27:58.114213 kernel: PCI: System does not support PCI
Apr 30 03:27:58.114227 kernel: vgaarb: loaded
Apr 30 03:27:58.114246 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 30 03:27:58.114261 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:27:58.114275 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:27:58.114289 kernel: pnp: PnP ACPI init
Apr 30 03:27:58.114303 kernel: pnp: PnP ACPI: found 3 devices
Apr 30 03:27:58.114318 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:27:58.114332 kernel: NET: Registered PF_INET protocol family
Apr 30 03:27:58.114347 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:27:58.114362 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 30 03:27:58.114379 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:27:58.114394 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 03:27:58.114408 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 30 03:27:58.114422 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 30 03:27:58.114437 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 30 03:27:58.114451 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 30 03:27:58.114465 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:27:58.114477 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:27:58.114492 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:27:58.114511 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 30 03:27:58.114527 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB)
Apr 30 03:27:58.114544 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 03:27:58.114560 kernel: Initialise system trusted keyrings
Apr 30 03:27:58.114576 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 30 03:27:58.114593 kernel: Key type asymmetric registered
Apr 30 03:27:58.114609 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:27:58.114625 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:27:58.114641 kernel: io scheduler mq-deadline registered
Apr 30 03:27:58.114660 kernel: io scheduler kyber registered
Apr 30 03:27:58.114675 kernel: io scheduler bfq registered
Apr 30 03:27:58.114690 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:27:58.114705 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:27:58.114719 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:27:58.114734 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 30 03:27:58.114748 kernel: i8042: PNP: No PS/2 controller found.
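[Editor's note: the hash-table lines above all follow one pattern: the byte figure is the entry count times the per-entry size, backed by a power-of-two number of pages, and "order: N" is log2 of that page count. For the TCP established table, 65536 entries at 8 bytes each is 524288 bytes, i.e. 128 pages, i.e. order 7. A quick Python check of the relationship (page size assumed 4096, as on x86-64):

    import math

    PAGE = 4096  # x86-64 base page size

    def order_for(total_bytes: int) -> int:
        """log2 of the number of pages backing an allocation."""
        pages = math.ceil(total_bytes / PAGE)
        return max(0, math.ceil(math.log2(pages)))

    # "TCP established hash table entries: 65536 (order: 7, 524288 bytes)"
    assert order_for(524288) == 7   # 524288 / 4096 = 128 pages = 2^7
    # "TCP bind hash table entries: 65536 (order: 9, 2097152 bytes)"
    assert order_for(2097152) == 9  # 2097152 / 4096 = 512 pages = 2^9

The bind table's larger byte total for the same entry count simply reflects a larger per-bucket structure (32 bytes rather than 8).]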
Apr 30 03:27:58.114934 kernel: rtc_cmos 00:02: registered as rtc0
Apr 30 03:27:58.115061 kernel: rtc_cmos 00:02: setting system clock to 2025-04-30T03:27:57 UTC (1745983677)
Apr 30 03:27:58.115199 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 30 03:27:58.115217 kernel: intel_pstate: CPU model not supported
Apr 30 03:27:58.115232 kernel: efifb: probing for efifb
Apr 30 03:27:58.115247 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 30 03:27:58.115261 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 30 03:27:58.115275 kernel: efifb: scrolling: redraw
Apr 30 03:27:58.115290 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 30 03:27:58.115308 kernel: Console: switching to colour frame buffer device 128x48
Apr 30 03:27:58.115323 kernel: fb0: EFI VGA frame buffer device
Apr 30 03:27:58.115337 kernel: pstore: Using crash dump compression: deflate
Apr 30 03:27:58.115351 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 30 03:27:58.115366 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:27:58.115381 kernel: Segment Routing with IPv6
Apr 30 03:27:58.115395 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:27:58.115410 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:27:58.115425 kernel: Key type dns_resolver registered
Apr 30 03:27:58.115438 kernel: IPI shorthand broadcast: enabled
Apr 30 03:27:58.115455 kernel: sched_clock: Marking stable (1004003200, 56629000)->(1330200400, -269568200)
Apr 30 03:27:58.115469 kernel: registered taskstats version 1
Apr 30 03:27:58.115484 kernel: Loading compiled-in X.509 certificates
Apr 30 03:27:58.115498 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:27:58.115513 kernel: Key type .fscrypt registered
Apr 30 03:27:58.115527 kernel: Key type fscrypt-provisioning registered
Apr 30 03:27:58.115541 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:27:58.115554 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:27:58.115572 kernel: ima: No architecture policies found
Apr 30 03:27:58.115587 kernel: clk: Disabling unused clocks
Apr 30 03:27:58.115600 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:27:58.115615 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:27:58.115629 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:27:58.115643 kernel: Run /init as init process
Apr 30 03:27:58.115657 kernel: with arguments:
Apr 30 03:27:58.115670 kernel: /init
Apr 30 03:27:58.115684 kernel: with environment:
Apr 30 03:27:58.115701 kernel: HOME=/
Apr 30 03:27:58.115716 kernel: TERM=linux
Apr 30 03:27:58.115730 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:27:58.115747 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:27:58.115764 systemd[1]: Detected virtualization microsoft.
Apr 30 03:27:58.115780 systemd[1]: Detected architecture x86-64.
Apr 30 03:27:58.115793 systemd[1]: Running in initrd.
Apr 30 03:27:58.115807 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:27:58.115824 systemd[1]: Hostname set to <localhost>.
Apr 30 03:27:58.115839 systemd[1]: Initializing machine ID from random generator.
Apr 30 03:27:58.115854 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:27:58.115870 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:27:58.115885 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:27:58.115902 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:27:58.115916 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:27:58.115928 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:27:58.115945 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:27:58.115962 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:27:58.115975 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:27:58.115983 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:27:58.115992 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:27:58.116001 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:27:58.116010 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:27:58.116027 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:27:58.116042 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:27:58.116057 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:27:58.116072 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:27:58.116991 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:27:58.117013 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:27:58.117025 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:27:58.117035 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:27:58.117050 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:27:58.117059 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:27:58.117070 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:27:58.117079 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:27:58.117125 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:27:58.117137 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:27:58.117147 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:27:58.117156 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:27:58.117191 systemd-journald[176]: Collecting audit messages is disabled.
Apr 30 03:27:58.117219 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:27:58.117229 systemd-journald[176]: Journal started
Apr 30 03:27:58.117254 systemd-journald[176]: Runtime Journal (/run/log/journal/79624f5138ea4cf9accc7143686d7974) is 8.0M, max 158.8M, 150.8M free.
Apr 30 03:27:58.127107 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:27:58.134558 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:27:58.137836 systemd-modules-load[177]: Inserted module 'overlay'
Apr 30 03:27:58.141466 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:27:58.152681 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:27:58.158292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:27:58.177402 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:27:58.185828 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:27:58.188130 kernel: Bridge firewalling registered
Apr 30 03:27:58.188211 systemd-modules-load[177]: Inserted module 'br_netfilter'
Apr 30 03:27:58.190513 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:27:58.200330 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:27:58.203439 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:27:58.210257 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:27:58.225165 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:27:58.239515 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:27:58.242459 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:27:58.242715 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:27:58.248109 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:27:58.268267 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:27:58.275436 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:27:58.291027 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:27:58.299065 dracut-cmdline[210]: dracut-dracut-053
Apr 30 03:27:58.299065 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:27:58.351110 systemd-resolved[214]: Positive Trust Anchors:
Apr 30 03:27:58.351130 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:27:58.351186 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:27:58.379296 systemd-resolved[214]: Defaulting to hostname 'linux'.
Apr 30 03:27:58.383182 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:27:58.395061 kernel: SCSI subsystem initialized
Apr 30 03:27:58.391793 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:27:58.403108 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:27:58.415116 kernel: iscsi: registered transport (tcp)
Apr 30 03:27:58.436724 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:27:58.436823 kernel: QLogic iSCSI HBA Driver
Apr 30 03:27:58.473254 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:27:58.482392 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:27:58.514623 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:27:58.514727 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:27:58.523269 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:27:58.569125 kernel: raid6: avx512x4 gen() 15351 MB/s
Apr 30 03:27:58.588459 kernel: raid6: avx512x2 gen() 12180 MB/s
Apr 30 03:27:58.608125 kernel: raid6: avx512x1 gen() 12108 MB/s
Apr 30 03:27:58.630115 kernel: raid6: avx2x4 gen() 16242 MB/s
Apr 30 03:27:58.649121 kernel: raid6: avx2x2 gen() 17056 MB/s
Apr 30 03:27:58.669669 kernel: raid6: avx2x1 gen() 13441 MB/s
Apr 30 03:27:58.669762 kernel: raid6: using algorithm avx2x2 gen() 17056 MB/s
Apr 30 03:27:58.692575 kernel: raid6: .... xor() 20023 MB/s, rmw enabled
Apr 30 03:27:58.692672 kernel: raid6: using avx512x2 recovery algorithm
Apr 30 03:27:58.715127 kernel: xor: automatically using best checksumming function avx
Apr 30 03:27:58.873124 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:27:58.884955 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:27:58.901332 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:27:58.923590 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Apr 30 03:27:58.931115 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:27:58.956353 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:27:58.976204 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Apr 30 03:27:59.008477 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:27:59.024381 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:27:59.074521 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:27:59.087344 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
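[Editor's note: the raid6 lines a few entries up are a boot-time benchmark: the kernel times each available generator implementation and keeps the fastest. Reduced to its core, the selection is an argmax over measured throughputs; a Python sketch using the numbers from this log:

    # gen() throughputs reported above, in MB/s
    results = {
        "avx512x4": 15351, "avx512x2": 12180, "avx512x1": 12108,
        "avx2x4": 16242, "avx2x2": 17056, "avx2x1": 13441,
    }
    best = max(results, key=results.get)
    assert (best, results[best]) == ("avx2x2", 17056)  # "using algorithm avx2x2"

The recovery algorithm is chosen by a separate measurement, which is why the log settles on avx512x2 for that role.]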
Apr 30 03:27:59.110233 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:27:59.120285 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:27:59.127312 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:27:59.133853 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:27:59.143401 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:27:59.170154 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:27:59.193113 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:27:59.200137 kernel: hv_vmbus: Vmbus version:5.2
Apr 30 03:27:59.214114 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 30 03:27:59.229852 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:27:59.254471 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 30 03:27:59.254506 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 30 03:27:59.229977 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:27:59.244095 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:27:59.244510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:27:59.244668 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:27:59.244984 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:27:59.280207 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 30 03:27:59.280568 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:27:59.293215 kernel: PTP clock support registered
Apr 30 03:27:59.296760 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:27:59.305292 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 03:27:59.296917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:27:59.312477 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:27:59.311534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:27:59.327082 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:27:59.327172 kernel: hv_utils: Registering HyperV Utility Driver
Apr 30 03:27:59.328127 kernel: hv_vmbus: registering driver hv_utils
Apr 30 03:27:59.328175 kernel: hv_utils: Heartbeat IC version 3.0
Apr 30 03:27:59.329110 kernel: hv_utils: Shutdown IC version 3.2
Apr 30 03:27:59.329152 kernel: hv_utils: TimeSync IC version 4.0
Apr 30 03:27:59.990269 systemd-resolved[214]: Clock change detected. Flushing caches.
Apr 30 03:28:00.034032 kernel: hv_vmbus: registering driver hid_hyperv
Apr 30 03:28:00.034066 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 30 03:28:00.034079 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 30 03:28:00.034272 kernel: hv_vmbus: registering driver hv_storvsc
Apr 30 03:28:00.034287 kernel: scsi host1: storvsc_host_t
Apr 30 03:28:00.034422 kernel: scsi host0: storvsc_host_t
Apr 30 03:28:00.034551 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 30 03:28:00.038215 kernel: hv_vmbus: registering driver hv_netvsc
Apr 30 03:28:00.046205 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Apr 30 03:28:00.060648 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:00.078289 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:28:00.092758 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 30 03:28:00.095570 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 03:28:00.095593 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 30 03:28:00.114288 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 30 03:28:00.133336 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 30 03:28:00.133567 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 30 03:28:00.133745 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 30 03:28:00.133916 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 30 03:28:00.134080 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:28:00.134100 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 30 03:28:00.124350 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
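[Editor's note: the sd line above is internally consistent: 63737856 logical blocks of 512 bytes is 32,633,782,272 bytes, which the kernel prints in both decimal and binary units. Spelled out in Python:

    blocks, block_size = 63737856, 512
    size = blocks * block_size       # 32,633,782,272 bytes
    print(round(size / 10**9, 1))    # 32.6 -> the "32.6 GB" figure (decimal)
    print(round(size / 2**30, 1))    # 30.4 -> the "30.4 GiB" figure (binary)
]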
Apr 30 03:28:00.153247 kernel: hv_netvsc 6045bde1-531f-6045-bde1-531f6045bde1 eth0: VF slot 1 added
Apr 30 03:28:00.164264 kernel: hv_vmbus: registering driver hv_pci
Apr 30 03:28:00.169253 kernel: hv_pci 0f8999c8-f57a-4e62-b62a-4fba12bc225f: PCI VMBus probing: Using version 0x10004
Apr 30 03:28:00.216763 kernel: hv_pci 0f8999c8-f57a-4e62-b62a-4fba12bc225f: PCI host bridge to bus f57a:00
Apr 30 03:28:00.216972 kernel: pci_bus f57a:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Apr 30 03:28:00.217158 kernel: pci_bus f57a:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 30 03:28:00.217341 kernel: pci f57a:00:02.0: [15b3:1016] type 00 class 0x020000
Apr 30 03:28:00.217521 kernel: pci f57a:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 30 03:28:00.217697 kernel: pci f57a:00:02.0: enabling Extended Tags
Apr 30 03:28:00.217868 kernel: pci f57a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f57a:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Apr 30 03:28:00.218038 kernel: pci_bus f57a:00: busn_res: [bus 00-ff] end is updated to 00
Apr 30 03:28:00.218417 kernel: pci f57a:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 30 03:28:00.382971 kernel: mlx5_core f57a:00:02.0: enabling device (0000 -> 0002)
Apr 30 03:28:00.612590 kernel: mlx5_core f57a:00:02.0: firmware version: 14.30.5000
Apr 30 03:28:00.612811 kernel: hv_netvsc 6045bde1-531f-6045-bde1-531f6045bde1 eth0: VF registering: eth1
Apr 30 03:28:00.612975 kernel: mlx5_core f57a:00:02.0 eth1: joined to eth0
Apr 30 03:28:00.613156 kernel: mlx5_core f57a:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Apr 30 03:28:00.622221 kernel: mlx5_core f57a:00:02.0 enP62842s1: renamed from eth1
Apr 30 03:28:00.687001 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 30 03:28:00.730224 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (451)
Apr 30 03:28:00.757703 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 30 03:28:00.769964 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 30 03:28:00.786228 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (446)
Apr 30 03:28:00.803318 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 30 03:28:00.807704 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 30 03:28:00.827383 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 03:28:00.841208 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:28:00.848210 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:28:01.856342 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:28:01.856414 disk-uuid[604]: The operation has completed successfully.
Apr 30 03:28:01.948330 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 03:28:01.948448 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 03:28:01.964382 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 03:28:01.970917 sh[690]: Success
Apr 30 03:28:02.009826 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 30 03:28:02.196405 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 03:28:02.215357 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 03:28:02.220363 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 03:28:02.248206 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26
Apr 30 03:28:02.248266 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:02.253066 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 03:28:02.255848 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 03:28:02.258260 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 03:28:02.544511 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 03:28:02.548209 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 03:28:02.557463 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 03:28:02.565393 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 03:28:02.588337 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:02.588417 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:02.588447 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:28:02.608312 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:28:02.618875 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 03:28:02.623890 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:02.630903 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 03:28:02.644446 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 03:28:02.659174 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:28:02.670004 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:28:02.690806 systemd-networkd[874]: lo: Link UP
Apr 30 03:28:02.690817 systemd-networkd[874]: lo: Gained carrier
Apr 30 03:28:02.693075 systemd-networkd[874]: Enumeration completed
Apr 30 03:28:02.693398 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:28:02.694644 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:28:02.694648 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:28:02.696996 systemd[1]: Reached target network.target - Network.
Apr 30 03:28:02.763209 kernel: mlx5_core f57a:00:02.0 enP62842s1: Link up
Apr 30 03:28:02.795870 kernel: hv_netvsc 6045bde1-531f-6045-bde1-531f6045bde1 eth0: Data path switched to VF: enP62842s1
Apr 30 03:28:02.795420 systemd-networkd[874]: enP62842s1: Link UP
Apr 30 03:28:02.795550 systemd-networkd[874]: eth0: Link UP
Apr 30 03:28:02.795759 systemd-networkd[874]: eth0: Gained carrier
Apr 30 03:28:02.795775 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:28:02.808173 systemd-networkd[874]: enP62842s1: Gained carrier
Apr 30 03:28:02.825236 systemd-networkd[874]: eth0: DHCPv4 address 10.200.8.38/24, gateway 10.200.8.1 acquired from 168.63.129.16
Apr 30 03:28:03.510976 ignition[852]: Ignition 2.19.0
Apr 30 03:28:03.510988 ignition[852]: Stage: fetch-offline
Apr 30 03:28:03.512587 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:28:03.511032 ignition[852]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:03.511042 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:03.511149 ignition[852]: parsed url from cmdline: ""
Apr 30 03:28:03.511154 ignition[852]: no config URL provided
Apr 30 03:28:03.511161 ignition[852]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:28:03.511171 ignition[852]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:28:03.511178 ignition[852]: failed to fetch config: resource requires networking
Apr 30 03:28:03.511566 ignition[852]: Ignition finished successfully
Apr 30 03:28:03.546393 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 03:28:03.562845 ignition[882]: Ignition 2.19.0
Apr 30 03:28:03.562857 ignition[882]: Stage: fetch
Apr 30 03:28:03.563078 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:03.563091 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:03.563212 ignition[882]: parsed url from cmdline: ""
Apr 30 03:28:03.563216 ignition[882]: no config URL provided
Apr 30 03:28:03.563221 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:28:03.563228 ignition[882]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:28:03.563248 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 30 03:28:03.640496 ignition[882]: GET result: OK
Apr 30 03:28:03.640635 ignition[882]: config has been read from IMDS userdata
Apr 30 03:28:03.640676 ignition[882]: parsing config with SHA512: fc741d48f3eedcd7830a80836667e390dcd8277938d6d053ac63cbdbda0890e7e6e4b781d84d75b43c51d88bb3b3c319c58056a5a88197f38713d29fb715ef12
Apr 30 03:28:03.646313 unknown[882]: fetched base config from "system"
Apr 30 03:28:03.646333 unknown[882]: fetched base config from "system"
Apr 30 03:28:03.646345 unknown[882]: fetched user config from "azure"
Apr 30 03:28:03.651495 ignition[882]: fetch: fetch complete
Apr 30 03:28:03.651506 ignition[882]: fetch: fetch passed
Apr 30 03:28:03.653767 ignition[882]: Ignition finished successfully
Apr 30 03:28:03.657576 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:28:03.671487 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:28:03.689877 ignition[888]: Ignition 2.19.0
Apr 30 03:28:03.689888 ignition[888]: Stage: kargs
Apr 30 03:28:03.692341 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:28:03.690122 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:03.690136 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:03.691003 ignition[888]: kargs: kargs passed
Apr 30 03:28:03.691050 ignition[888]: Ignition finished successfully
Apr 30 03:28:03.707458 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:28:03.725279 ignition[894]: Ignition 2.19.0
Apr 30 03:28:03.725291 ignition[894]: Stage: disks
Apr 30 03:28:03.727363 systemd[1]: Finished ignition-disks.service - Ignition (disks).
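[Editor's note: Ignition's fetch stage above reads the config from the Azure Instance Metadata Service at the exact URL in the log, then records a SHA512 of the config it parsed. A request of the same shape can be reproduced from inside the VM; this is an illustrative Python sketch, not Ignition's own code, noting that IMDS answers only link-local requests carrying the "Metadata: true" header:

    import base64
    import hashlib
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read()  # Azure returns userData base64-encoded

    # Ignition logs the digest of the config it parses (the decoded
    # userdata), so decode before hashing to compare with the log line.
    print(hashlib.sha512(base64.b64decode(body)).hexdigest())
]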
Apr 30 03:28:03.725514 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:03.725527 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:03.726405 ignition[894]: disks: disks passed
Apr 30 03:28:03.737307 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:28:03.726450 ignition[894]: Ignition finished successfully
Apr 30 03:28:03.747471 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:28:03.750752 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:28:03.755553 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:28:03.758659 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:28:03.777387 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:28:03.831959 systemd-fsck[902]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 30 03:28:03.838527 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:28:03.854122 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:28:03.945206 kernel: EXT4-fs (sda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:28:03.945759 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:28:03.948841 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:28:03.990343 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:28:03.996690 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:28:04.002210 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (913)
Apr 30 03:28:04.007369 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 03:28:04.015730 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:04.023236 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:04.023306 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:28:04.024486 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:28:04.024537 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:28:04.034204 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:28:04.041454 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:28:04.044199 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:28:04.058401 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:28:04.542748 systemd-networkd[874]: eth0: Gained IPv6LL
Apr 30 03:28:04.543165 systemd-networkd[874]: enP62842s1: Gained IPv6LL
Apr 30 03:28:04.593511 coreos-metadata[915]: Apr 30 03:28:04.593 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 30 03:28:04.600483 coreos-metadata[915]: Apr 30 03:28:04.600 INFO Fetch successful
Apr 30 03:28:04.603360 coreos-metadata[915]: Apr 30 03:28:04.600 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 30 03:28:04.613245 coreos-metadata[915]: Apr 30 03:28:04.613 INFO Fetch successful
Apr 30 03:28:04.631797 coreos-metadata[915]: Apr 30 03:28:04.629 INFO wrote hostname ci-4081.3.3-a-afe39379c7 to /sysroot/etc/hostname
Apr 30 03:28:04.637532 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:28:04.644542 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:28:04.670080 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:28:04.679319 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:28:04.684151 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:28:05.448545 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:28:05.460342 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:28:05.467435 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:28:05.484565 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:05.484138 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 03:28:05.497656 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 03:28:05.523164 ignition[1038]: INFO : Ignition 2.19.0
Apr 30 03:28:05.523164 ignition[1038]: INFO : Stage: mount
Apr 30 03:28:05.531035 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:05.531035 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:05.531035 ignition[1038]: INFO : mount: mount passed
Apr 30 03:28:05.531035 ignition[1038]: INFO : Ignition finished successfully
Apr 30 03:28:05.525246 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 03:28:05.542180 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 03:28:05.551370 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:28:05.568256 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1047)
Apr 30 03:28:05.582422 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:28:05.582517 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:28:05.585214 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:28:05.591207 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:28:05.592860 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:28:05.619009 ignition[1063]: INFO : Ignition 2.19.0
Apr 30 03:28:05.619009 ignition[1063]: INFO : Stage: files
Apr 30 03:28:05.623516 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:05.623516 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 03:28:05.623516 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 03:28:05.652724 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 03:28:05.652724 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 03:28:05.737989 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 03:28:05.742452 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 03:28:05.742452 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 03:28:05.738541 unknown[1063]: wrote ssh authorized keys file for user: core
Apr 30 03:28:05.752986 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:28:05.758484 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 03:28:05.815848 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 03:28:09.104043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:28:09.110352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:28:09.110352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:28:09.110352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:28:09.110352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:28:09.110352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:28:09.110352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:28:09.110352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:28:09.148263 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:28:09.148263 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:28:09.148263 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:28:09.148263 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 03:28:09.170825 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9):
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 03:28:09.170825 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 03:28:09.183996 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Apr 30 03:28:09.781469 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 03:28:10.744909 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 03:28:10.744909 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 03:28:10.773121 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:10.782000 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:10.782000 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 03:28:10.782000 ignition[1063]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:10.782000 ignition[1063]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:10.782000 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:28:10.782000 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:28:10.782000 ignition[1063]: INFO : files: files passed Apr 30 03:28:10.782000 ignition[1063]: INFO : Ignition finished successfully Apr 30 03:28:10.774940 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 03:28:10.812789 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 03:28:10.822587 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 03:28:10.826137 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 03:28:10.829722 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 03:28:10.843649 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:10.843649 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:10.854292 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:10.859977 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:10.863914 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 03:28:10.880431 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 03:28:10.912984 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 03:28:10.913099 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Apr 30 03:28:10.923181 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 03:28:10.926653 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 03:28:10.933276 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 03:28:10.947452 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 03:28:10.963621 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:10.973417 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 03:28:10.986392 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:10.993312 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:10.996825 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 03:28:11.004467 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 03:28:11.004656 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:11.013877 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 03:28:11.016688 systemd[1]: Stopped target basic.target - Basic System. Apr 30 03:28:11.021846 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 03:28:11.027149 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:28:11.033007 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 03:28:11.039140 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 03:28:11.045061 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:28:11.055856 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 03:28:11.061895 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 03:28:11.067370 systemd[1]: Stopped target swap.target - Swaps. Apr 30 03:28:11.072003 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 03:28:11.072216 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:28:11.080424 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:28:11.086724 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:11.087908 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 03:28:11.090134 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:11.096564 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 03:28:11.096713 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 03:28:11.113879 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 03:28:11.117104 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:11.125099 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 03:28:11.125311 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 03:28:11.133076 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 03:28:11.133272 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 03:28:11.147407 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Apr 30 03:28:11.155455 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 03:28:11.160478 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 03:28:11.160686 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:11.173984 ignition[1116]: INFO : Ignition 2.19.0 Apr 30 03:28:11.173984 ignition[1116]: INFO : Stage: umount Apr 30 03:28:11.173984 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:11.173984 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:11.173984 ignition[1116]: INFO : umount: umount passed Apr 30 03:28:11.173984 ignition[1116]: INFO : Ignition finished successfully Apr 30 03:28:11.164476 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 03:28:11.164637 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:28:11.174550 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 03:28:11.174663 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 03:28:11.180365 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 03:28:11.185131 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 03:28:11.188220 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 03:28:11.188279 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 03:28:11.194382 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 03:28:11.196441 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 03:28:11.202875 systemd[1]: Stopped target network.target - Network. Apr 30 03:28:11.231416 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 03:28:11.231509 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:28:11.240467 systemd[1]: Stopped target paths.target - Path Units. Apr 30 03:28:11.245121 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 03:28:11.250254 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:11.257758 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 03:28:11.258765 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 03:28:11.259259 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 03:28:11.259304 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:28:11.259781 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 03:28:11.259814 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:28:11.260246 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 03:28:11.260294 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 03:28:11.260719 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 03:28:11.260752 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 03:28:11.261310 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 03:28:11.261669 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 03:28:11.263431 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 03:28:11.264070 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 03:28:11.264156 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Apr 30 03:28:11.264711 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 03:28:11.264783 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 03:28:11.267067 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 03:28:11.267161 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 03:28:11.292634 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 03:28:11.292769 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 03:28:11.294241 systemd-networkd[874]: eth0: DHCPv6 lease lost Apr 30 03:28:11.299369 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 03:28:11.299484 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 03:28:11.312917 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 03:28:11.313000 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:11.334407 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 03:28:11.341429 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 03:28:11.341519 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:28:11.347582 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:28:11.347638 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:11.361027 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 03:28:11.362977 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:11.394827 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 03:28:11.394915 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:11.404851 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:11.420720 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 03:28:11.421041 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:11.430955 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 03:28:11.431040 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:11.439339 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 03:28:11.439398 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:11.445069 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 03:28:11.445133 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:28:11.468903 kernel: hv_netvsc 6045bde1-531f-6045-bde1-531f6045bde1 eth0: Data path switched from VF: enP62842s1 Apr 30 03:28:11.451335 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 03:28:11.451382 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 03:28:11.459301 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:28:11.459367 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:11.487399 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 03:28:11.490542 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 03:28:11.490629 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 30 03:28:11.497458 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 03:28:11.497523 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:28:11.514013 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 03:28:11.514097 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:11.520241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:28:11.520299 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:11.529976 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 03:28:11.530099 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 03:28:11.540551 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 03:28:11.540663 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 03:28:11.550734 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 03:28:11.557439 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 03:28:11.566400 systemd[1]: Switching root. Apr 30 03:28:11.625897 systemd-journald[176]: Journal stopped Apr 30 03:27:58.110673 kernel:
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Apr 30 03:27:58.110685 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Apr 30 03:27:58.110698 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Apr 30 03:27:58.110708 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Apr 30 03:27:58.110720 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Apr 30 03:27:58.110731 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Apr 30 03:27:58.110743 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 30 03:27:58.110755 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 30 03:27:58.110768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Apr 30 03:27:58.110784 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Apr 30 03:27:58.110795 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Apr 30 03:27:58.110806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Apr 30 03:27:58.111699 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Apr 30 03:27:58.111720 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Apr 30 03:27:58.111736 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Apr 30 03:27:58.111750 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Apr 30 03:27:58.111764 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Apr 30 03:27:58.111778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Apr 30 03:27:58.111797 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Apr 30 03:27:58.111811 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Apr 30 03:27:58.111826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Apr 30 03:27:58.111840 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Apr 30 03:27:58.111853 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Apr 30 03:27:58.111867 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Apr 30 03:27:58.111881 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Apr 30 03:27:58.111895 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Apr 30 03:27:58.111909 kernel: Zone ranges: Apr 30 03:27:58.111927 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 03:27:58.111940 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 30 03:27:58.111954 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Apr 30 03:27:58.111968 kernel: Movable zone start for each node Apr 30 03:27:58.111982 kernel: Early memory node ranges Apr 30 03:27:58.111996 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 30 03:27:58.112012 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Apr 30 03:27:58.112025 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Apr 30 03:27:58.112038 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Apr 30 03:27:58.112056 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Apr 30 03:27:58.112070 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 03:27:58.112083 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 30 03:27:58.112119 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Apr 30 03:27:58.112133 kernel: ACPI: PM-Timer IO Port: 0x408 Apr 30 03:27:58.112146 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Apr 30 03:27:58.112160 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Apr 30 03:27:58.112173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 03:27:58.112187 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 03:27:58.112206 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Apr 30 03:27:58.112220 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 30 03:27:58.112233 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Apr 30 03:27:58.112246 kernel: Booting paravirtualized kernel on Hyper-V Apr 30 03:27:58.112260 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 03:27:58.112275 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 30 03:27:58.112288 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Apr 30 03:27:58.112302 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Apr 30 03:27:58.112316 kernel: pcpu-alloc: [0] 0 1 Apr 30 03:27:58.112332 kernel: Hyper-V: PV spinlocks enabled Apr 30 03:27:58.112346 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 30 03:27:58.112362 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:27:58.112376 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 03:27:58.112390 kernel: random: crng init done Apr 30 03:27:58.112405 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 30 03:27:58.112418 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 03:27:58.112431 kernel: Fallback order for Node 0: 0 Apr 30 03:27:58.112449 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Apr 30 03:27:58.112474 kernel: Policy zone: Normal Apr 30 03:27:58.112491 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 03:27:58.112506 kernel: software IO TLB: area num 2. Apr 30 03:27:58.112521 kernel: Memory: 8069608K/8387460K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 317592K reserved, 0K cma-reserved) Apr 30 03:27:58.112535 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 03:27:58.112548 kernel: ftrace: allocating 37944 entries in 149 pages Apr 30 03:27:58.112562 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 03:27:58.112576 kernel: Dynamic Preempt: voluntary Apr 30 03:27:58.112590 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 03:27:58.112605 kernel: rcu: RCU event tracing is enabled. Apr 30 03:27:58.112623 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 03:27:58.112637 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 03:27:58.112651 kernel: Rude variant of Tasks RCU enabled. Apr 30 03:27:58.112665 kernel: Tracing variant of Tasks RCU enabled. 
Apr 30 03:27:58.112679 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 03:27:58.112696 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 03:27:58.112710 kernel: Using NULL legacy PIC Apr 30 03:27:58.112723 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Apr 30 03:27:58.112737 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 03:27:58.112752 kernel: Console: colour dummy device 80x25 Apr 30 03:27:58.112768 kernel: printk: console [tty1] enabled Apr 30 03:27:58.112784 kernel: printk: console [ttyS0] enabled Apr 30 03:27:58.112801 kernel: printk: bootconsole [earlyser0] disabled Apr 30 03:27:58.112816 kernel: ACPI: Core revision 20230628 Apr 30 03:27:58.112832 kernel: Failed to register legacy timer interrupt Apr 30 03:27:58.112852 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 03:27:58.112867 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 30 03:27:58.112882 kernel: Hyper-V: Using IPI hypercalls Apr 30 03:27:58.112899 kernel: APIC: send_IPI() replaced with hv_send_ipi() Apr 30 03:27:58.112915 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Apr 30 03:27:58.112931 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Apr 30 03:27:58.112947 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Apr 30 03:27:58.112962 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Apr 30 03:27:58.112979 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Apr 30 03:27:58.112999 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908) Apr 30 03:27:58.113015 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 30 03:27:58.113030 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 30 03:27:58.113045 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 03:27:58.113059 kernel: Spectre V2 : Mitigation: Retpolines Apr 30 03:27:58.113074 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 03:27:58.113103 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 03:27:58.113116 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 30 03:27:58.113129 kernel: RETBleed: Vulnerable Apr 30 03:27:58.113147 kernel: Speculative Store Bypass: Vulnerable Apr 30 03:27:58.113160 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Apr 30 03:27:58.113173 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 30 03:27:58.113187 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 03:27:58.113200 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 03:27:58.113213 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 03:27:58.113227 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 30 03:27:58.113241 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 30 03:27:58.113255 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 30 03:27:58.113269 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 03:27:58.113282 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 30 03:27:58.113298 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 30 03:27:58.113311 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 30 03:27:58.113325 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 30 03:27:58.113338 kernel: Freeing SMP alternatives memory: 32K Apr 30 03:27:58.113351 kernel: pid_max: default: 32768 minimum: 301 Apr 30 03:27:58.113364 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 03:27:58.113379 kernel: landlock: Up and running. Apr 30 03:27:58.113394 kernel: SELinux: Initializing. Apr 30 03:27:58.113409 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 03:27:58.113425 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 03:27:58.113439 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 30 03:27:58.113454 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:27:58.113471 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:27:58.113485 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:27:58.113499 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 30 03:27:58.113512 kernel: signal: max sigframe size: 3632 Apr 30 03:27:58.113526 kernel: rcu: Hierarchical SRCU implementation. Apr 30 03:27:58.113541 kernel: rcu: Max phase no-delay instances is 400. Apr 30 03:27:58.113554 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 30 03:27:58.113568 kernel: smp: Bringing up secondary CPUs ... Apr 30 03:27:58.113585 kernel: smpboot: x86: Booting SMP configuration: Apr 30 03:27:58.113599 kernel: .... node #0, CPUs: #1 Apr 30 03:27:58.113613 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Apr 30 03:27:58.113628 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Apr 30 03:27:58.113641 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 03:27:58.113655 kernel: smpboot: Max logical packages: 1 Apr 30 03:27:58.113669 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS) Apr 30 03:27:58.113682 kernel: devtmpfs: initialized Apr 30 03:27:58.113697 kernel: x86/mm: Memory block size: 128MB Apr 30 03:27:58.113713 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Apr 30 03:27:58.113727 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 03:27:58.113741 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 03:27:58.113754 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 03:27:58.113768 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 03:27:58.113782 kernel: audit: initializing netlink subsys (disabled) Apr 30 03:27:58.113796 kernel: audit: type=2000 audit(1745983676.028:1): state=initialized audit_enabled=0 res=1 Apr 30 03:27:58.113810 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 03:27:58.113824 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 03:27:58.113840 kernel: cpuidle: using governor menu Apr 30 03:27:58.113854 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 03:27:58.113867 kernel: dca service started, version 1.12.1 Apr 30 03:27:58.113881 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Apr 30 03:27:58.113895 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 30 03:27:58.113909 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 03:27:58.113923 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 03:27:58.113937 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 03:27:58.113951 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 03:27:58.113967 kernel: ACPI: Added _OSI(Module Device) Apr 30 03:27:58.113982 kernel: ACPI: Added _OSI(Processor Device) Apr 30 03:27:58.113996 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 03:27:58.114010 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 03:27:58.114023 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 03:27:58.114037 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 30 03:27:58.114051 kernel: ACPI: Interpreter enabled Apr 30 03:27:58.114066 kernel: ACPI: PM: (supports S0 S5) Apr 30 03:27:58.114079 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 03:27:58.114115 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 03:27:58.114129 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 30 03:27:58.114144 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Apr 30 03:27:58.114158 kernel: iommu: Default domain type: Translated Apr 30 03:27:58.114172 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 03:27:58.114186 kernel: efivars: Registered efivars operations Apr 30 03:27:58.114200 kernel: PCI: Using ACPI for IRQ routing Apr 30 03:27:58.114213 kernel: PCI: System does not support PCI Apr 30 03:27:58.114227 kernel: vgaarb: loaded Apr 30 03:27:58.114246 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Apr 30 03:27:58.114261 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 03:27:58.114275 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 03:27:58.114289 kernel: 
pnp: PnP ACPI init Apr 30 03:27:58.114303 kernel: pnp: PnP ACPI: found 3 devices Apr 30 03:27:58.114318 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 03:27:58.114332 kernel: NET: Registered PF_INET protocol family Apr 30 03:27:58.114347 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 30 03:27:58.114362 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Apr 30 03:27:58.114379 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 03:27:58.114394 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 03:27:58.114408 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 30 03:27:58.114422 kernel: TCP: Hash tables configured (established 65536 bind 65536) Apr 30 03:27:58.114437 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 30 03:27:58.114451 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 30 03:27:58.114465 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 03:27:58.114477 kernel: NET: Registered PF_XDP protocol family Apr 30 03:27:58.114492 kernel: PCI: CLS 0 bytes, default 64 Apr 30 03:27:58.114511 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 30 03:27:58.114527 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB) Apr 30 03:27:58.114544 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 30 03:27:58.114560 kernel: Initialise system trusted keyrings Apr 30 03:27:58.114576 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Apr 30 03:27:58.114593 kernel: Key type asymmetric registered Apr 30 03:27:58.114609 kernel: Asymmetric key parser 'x509' registered Apr 30 03:27:58.114625 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 03:27:58.114641 kernel: io scheduler mq-deadline registered Apr 30 03:27:58.114660 kernel: io scheduler kyber registered Apr 30 03:27:58.114675 kernel: io scheduler bfq registered Apr 30 03:27:58.114690 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 03:27:58.114705 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 03:27:58.114719 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 03:27:58.114734 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 30 03:27:58.114748 kernel: i8042: PNP: No PS/2 controller found. 
Apr 30 03:27:58.114934 kernel: rtc_cmos 00:02: registered as rtc0 Apr 30 03:27:58.115061 kernel: rtc_cmos 00:02: setting system clock to 2025-04-30T03:27:57 UTC (1745983677) Apr 30 03:27:58.115199 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Apr 30 03:27:58.115217 kernel: intel_pstate: CPU model not supported Apr 30 03:27:58.115232 kernel: efifb: probing for efifb Apr 30 03:27:58.115247 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 30 03:27:58.115261 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 30 03:27:58.115275 kernel: efifb: scrolling: redraw Apr 30 03:27:58.115290 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 30 03:27:58.115308 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 03:27:58.115323 kernel: fb0: EFI VGA frame buffer device Apr 30 03:27:58.115337 kernel: pstore: Using crash dump compression: deflate Apr 30 03:27:58.115351 kernel: pstore: Registered efi_pstore as persistent store backend Apr 30 03:27:58.115366 kernel: NET: Registered PF_INET6 protocol family Apr 30 03:27:58.115381 kernel: Segment Routing with IPv6 Apr 30 03:27:58.115395 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 03:27:58.115410 kernel: NET: Registered PF_PACKET protocol family Apr 30 03:27:58.115425 kernel: Key type dns_resolver registered Apr 30 03:27:58.115438 kernel: IPI shorthand broadcast: enabled Apr 30 03:27:58.115455 kernel: sched_clock: Marking stable (1004003200, 56629000)->(1330200400, -269568200) Apr 30 03:27:58.115469 kernel: registered taskstats version 1 Apr 30 03:27:58.115484 kernel: Loading compiled-in X.509 certificates Apr 30 03:27:58.115498 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b' Apr 30 03:27:58.115513 kernel: Key type .fscrypt registered Apr 30 03:27:58.115527 kernel: Key type fscrypt-provisioning registered Apr 30 03:27:58.115541 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 03:27:58.115554 kernel: ima: Allocated hash algorithm: sha1 Apr 30 03:27:58.115572 kernel: ima: No architecture policies found Apr 30 03:27:58.115587 kernel: clk: Disabling unused clocks Apr 30 03:27:58.115600 kernel: Freeing unused kernel image (initmem) memory: 42864K Apr 30 03:27:58.115615 kernel: Write protecting the kernel read-only data: 36864k Apr 30 03:27:58.115629 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K Apr 30 03:27:58.115643 kernel: Run /init as init process Apr 30 03:27:58.115657 kernel: with arguments: Apr 30 03:27:58.115670 kernel: /init Apr 30 03:27:58.115684 kernel: with environment: Apr 30 03:27:58.115701 kernel: HOME=/ Apr 30 03:27:58.115716 kernel: TERM=linux Apr 30 03:27:58.115730 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 03:27:58.115747 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:27:58.115764 systemd[1]: Detected virtualization microsoft. Apr 30 03:27:58.115780 systemd[1]: Detected architecture x86-64. Apr 30 03:27:58.115793 systemd[1]: Running in initrd. Apr 30 03:27:58.115807 systemd[1]: No hostname configured, using default hostname. Apr 30 03:27:58.115824 systemd[1]: Hostname set to <localhost>.
Apr 30 03:27:58.115839 systemd[1]: Initializing machine ID from random generator. Apr 30 03:27:58.115854 systemd[1]: Queued start job for default target initrd.target. Apr 30 03:27:58.115870 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:27:58.115885 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:27:58.115902 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 03:27:58.115916 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:27:58.115928 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 03:27:58.115945 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 03:27:58.115962 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 03:27:58.115975 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 03:27:58.115983 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:27:58.115992 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:27:58.116001 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:27:58.116010 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:27:58.116027 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:27:58.116042 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:27:58.116057 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:27:58.116072 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:27:58.116991 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 03:27:58.117013 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 03:27:58.117025 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:27:58.117035 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:27:58.117050 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:27:58.117059 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:27:58.117070 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 03:27:58.117079 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:27:58.117125 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 03:27:58.117137 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 03:27:58.117147 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:27:58.117156 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:27:58.117191 systemd-journald[176]: Collecting audit messages is disabled. Apr 30 03:27:58.117219 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:27:58.117229 systemd-journald[176]: Journal started Apr 30 03:27:58.117254 systemd-journald[176]: Runtime Journal (/run/log/journal/79624f5138ea4cf9accc7143686d7974) is 8.0M, max 158.8M, 150.8M free. Apr 30 03:27:58.127107 systemd[1]: Started systemd-journald.service - Journal Service. 
Apr 30 03:27:58.134558 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 03:27:58.137836 systemd-modules-load[177]: Inserted module 'overlay' Apr 30 03:27:58.141466 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:27:58.152681 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 03:27:58.158292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:27:58.177402 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:27:58.185828 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 03:27:58.188130 kernel: Bridge firewalling registered Apr 30 03:27:58.188211 systemd-modules-load[177]: Inserted module 'br_netfilter' Apr 30 03:27:58.190513 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:27:58.200330 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:27:58.203439 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:27:58.210257 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:27:58.225165 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:27:58.239515 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:27:58.242459 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:27:58.242715 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:27:58.248109 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:27:58.268267 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 03:27:58.275436 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:27:58.291027 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:27:58.299065 dracut-cmdline[210]: dracut-dracut-053 Apr 30 03:27:58.299065 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:27:58.351110 systemd-resolved[214]: Positive Trust Anchors: Apr 30 03:27:58.351130 systemd-resolved[214]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:27:58.351186 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:27:58.379296 systemd-resolved[214]: Defaulting to hostname 'linux'. Apr 30 03:27:58.383182 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:27:58.395061 kernel: SCSI subsystem initialized Apr 30 03:27:58.391793 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:27:58.403108 kernel: Loading iSCSI transport class v2.0-870. Apr 30 03:27:58.415116 kernel: iscsi: registered transport (tcp) Apr 30 03:27:58.436724 kernel: iscsi: registered transport (qla4xxx) Apr 30 03:27:58.436823 kernel: QLogic iSCSI HBA Driver Apr 30 03:27:58.473254 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 03:27:58.482392 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 03:27:58.514623 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 03:27:58.514727 kernel: device-mapper: uevent: version 1.0.3 Apr 30 03:27:58.523269 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 03:27:58.569125 kernel: raid6: avx512x4 gen() 15351 MB/s Apr 30 03:27:58.588459 kernel: raid6: avx512x2 gen() 12180 MB/s Apr 30 03:27:58.608125 kernel: raid6: avx512x1 gen() 12108 MB/s Apr 30 03:27:58.630115 kernel: raid6: avx2x4 gen() 16242 MB/s Apr 30 03:27:58.649121 kernel: raid6: avx2x2 gen() 17056 MB/s Apr 30 03:27:58.669669 kernel: raid6: avx2x1 gen() 13441 MB/s Apr 30 03:27:58.669762 kernel: raid6: using algorithm avx2x2 gen() 17056 MB/s Apr 30 03:27:58.692575 kernel: raid6: .... xor() 20023 MB/s, rmw enabled Apr 30 03:27:58.692672 kernel: raid6: using avx512x2 recovery algorithm Apr 30 03:27:58.715127 kernel: xor: automatically using best checksumming function avx Apr 30 03:27:58.873124 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 03:27:58.884955 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:27:58.901332 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:27:58.923590 systemd-udevd[397]: Using default interface naming scheme 'v255'. Apr 30 03:27:58.931115 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:27:58.956353 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 03:27:58.976204 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Apr 30 03:27:59.008477 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:27:59.024381 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:27:59.074521 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:27:59.087344 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 30 03:27:59.110233 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 03:27:59.120285 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:27:59.127312 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:27:59.133853 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:27:59.143401 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 03:27:59.170154 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:27:59.193113 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 03:27:59.200137 kernel: hv_vmbus: Vmbus version:5.2 Apr 30 03:27:59.214114 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 30 03:27:59.229852 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:27:59.254471 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 30 03:27:59.254506 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 30 03:27:59.229977 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:27:59.244095 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:27:59.244510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:27:59.244668 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:27:59.244984 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:27:59.280207 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Apr 30 03:27:59.280568 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:27:59.293215 kernel: PTP clock support registered Apr 30 03:27:59.296760 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:27:59.305292 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 03:27:59.296917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:27:59.312477 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 03:27:59.311534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:27:59.327082 kernel: AES CTR mode by8 optimization enabled Apr 30 03:27:59.327172 kernel: hv_utils: Registering HyperV Utility Driver Apr 30 03:27:59.328127 kernel: hv_vmbus: registering driver hv_utils Apr 30 03:27:59.328175 kernel: hv_utils: Heartbeat IC version 3.0 Apr 30 03:27:59.329110 kernel: hv_utils: Shutdown IC version 3.2 Apr 30 03:27:59.329152 kernel: hv_utils: TimeSync IC version 4.0 Apr 30 03:27:59.990269 systemd-resolved[214]: Clock change detected. Flushing caches. 
Apr 30 03:28:00.034032 kernel: hv_vmbus: registering driver hid_hyperv Apr 30 03:28:00.034066 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Apr 30 03:28:00.034079 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 30 03:28:00.034272 kernel: hv_vmbus: registering driver hv_storvsc Apr 30 03:28:00.034287 kernel: scsi host1: storvsc_host_t Apr 30 03:28:00.034422 kernel: scsi host0: storvsc_host_t Apr 30 03:28:00.034551 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 30 03:28:00.038215 kernel: hv_vmbus: registering driver hv_netvsc Apr 30 03:28:00.046205 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Apr 30 03:28:00.060648 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:00.078289 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:28:00.092758 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 30 03:28:00.095570 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 03:28:00.095593 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 30 03:28:00.114288 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 30 03:28:00.133336 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 30 03:28:00.133567 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 03:28:00.133745 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 30 03:28:00.133916 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 30 03:28:00.134080 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:00.134100 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 30 03:28:00.124350 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
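The storvsc attach above reports the boot disk as 63737856 logical blocks of 512 bytes each and renders that as both 32.6 GB and 30.4 GiB; the two figures differ because GB counts in powers of ten while GiB counts in powers of two. The conversion, checked in Python:

    # Verify the disk-size figures printed by the sd driver above.
    blocks, block_size = 63737856, 512
    size_bytes = blocks * block_size           # 32633782272 bytes

    print(f"{size_bytes / 10**9:.1f} GB")      # 32.6 GB  (decimal, 10**9)
    print(f"{size_bytes / 2**30:.1f} GiB")     # 30.4 GiB (binary, 2**30)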
Apr 30 03:28:00.153247 kernel: hv_netvsc 6045bde1-531f-6045-bde1-531f6045bde1 eth0: VF slot 1 added Apr 30 03:28:00.164264 kernel: hv_vmbus: registering driver hv_pci Apr 30 03:28:00.169253 kernel: hv_pci 0f8999c8-f57a-4e62-b62a-4fba12bc225f: PCI VMBus probing: Using version 0x10004 Apr 30 03:28:00.216763 kernel: hv_pci 0f8999c8-f57a-4e62-b62a-4fba12bc225f: PCI host bridge to bus f57a:00 Apr 30 03:28:00.216972 kernel: pci_bus f57a:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Apr 30 03:28:00.217158 kernel: pci_bus f57a:00: No busn resource found for root bus, will use [bus 00-ff] Apr 30 03:28:00.217341 kernel: pci f57a:00:02.0: [15b3:1016] type 00 class 0x020000 Apr 30 03:28:00.217521 kernel: pci f57a:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 30 03:28:00.217697 kernel: pci f57a:00:02.0: enabling Extended Tags Apr 30 03:28:00.217868 kernel: pci f57a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f57a:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Apr 30 03:28:00.218038 kernel: pci_bus f57a:00: busn_res: [bus 00-ff] end is updated to 00 Apr 30 03:28:00.218417 kernel: pci f57a:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 30 03:28:00.382971 kernel: mlx5_core f57a:00:02.0: enabling device (0000 -> 0002) Apr 30 03:28:00.612590 kernel: mlx5_core f57a:00:02.0: firmware version: 14.30.5000 Apr 30 03:28:00.612811 kernel: hv_netvsc 6045bde1-531f-6045-bde1-531f6045bde1 eth0: VF registering: eth1 Apr 30 03:28:00.612975 kernel: mlx5_core f57a:00:02.0 eth1: joined to eth0 Apr 30 03:28:00.613156 kernel: mlx5_core f57a:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 03:28:00.622221 kernel: mlx5_core f57a:00:02.0 enP62842s1: renamed from eth1 Apr 30 03:28:00.687001 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 30 03:28:00.730224 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (451) Apr 30 03:28:00.757703 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 30 03:28:00.769964 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Apr 30 03:28:00.786228 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (446) Apr 30 03:28:00.803318 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 30 03:28:00.807704 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 30 03:28:00.827383 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 03:28:00.841208 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:00.848210 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:01.856342 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:01.856414 disk-uuid[604]: The operation has completed successfully. Apr 30 03:28:01.948330 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:28:01.948448 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:28:01.964382 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 03:28:01.970917 sh[690]: Success Apr 30 03:28:02.009826 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 03:28:02.196405 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Apr 30 03:28:02.215357 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 03:28:02.220363 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 03:28:02.248206 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:28:02.248266 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:02.253066 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:28:02.255848 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:28:02.258260 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:28:02.544511 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:28:02.548209 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:28:02.557463 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:28:02.565393 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 03:28:02.588337 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:02.588417 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:02.588447 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:28:02.608312 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:28:02.618875 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 03:28:02.623890 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:02.630903 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:28:02.644446 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 03:28:02.659174 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:28:02.670004 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:28:02.690806 systemd-networkd[874]: lo: Link UP Apr 30 03:28:02.690817 systemd-networkd[874]: lo: Gained carrier Apr 30 03:28:02.693075 systemd-networkd[874]: Enumeration completed Apr 30 03:28:02.693398 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:28:02.694644 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:02.694648 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:28:02.696996 systemd[1]: Reached target network.target - Network. Apr 30 03:28:02.763209 kernel: mlx5_core f57a:00:02.0 enP62842s1: Link up Apr 30 03:28:02.795870 kernel: hv_netvsc 6045bde1-531f-6045-bde1-531f6045bde1 eth0: Data path switched to VF: enP62842s1 Apr 30 03:28:02.795420 systemd-networkd[874]: enP62842s1: Link UP Apr 30 03:28:02.795550 systemd-networkd[874]: eth0: Link UP Apr 30 03:28:02.795759 systemd-networkd[874]: eth0: Gained carrier Apr 30 03:28:02.795775 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 30 03:28:02.808173 systemd-networkd[874]: enP62842s1: Gained carrier Apr 30 03:28:02.825236 systemd-networkd[874]: eth0: DHCPv4 address 10.200.8.38/24, gateway 10.200.8.1 acquired from 168.63.129.16 Apr 30 03:28:03.510976 ignition[852]: Ignition 2.19.0 Apr 30 03:28:03.510988 ignition[852]: Stage: fetch-offline Apr 30 03:28:03.512587 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:28:03.511032 ignition[852]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:03.511042 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:03.511149 ignition[852]: parsed url from cmdline: "" Apr 30 03:28:03.511154 ignition[852]: no config URL provided Apr 30 03:28:03.511161 ignition[852]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:28:03.511171 ignition[852]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:28:03.511178 ignition[852]: failed to fetch config: resource requires networking Apr 30 03:28:03.511566 ignition[852]: Ignition finished successfully Apr 30 03:28:03.546393 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 30 03:28:03.562845 ignition[882]: Ignition 2.19.0 Apr 30 03:28:03.562857 ignition[882]: Stage: fetch Apr 30 03:28:03.563078 ignition[882]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:03.563091 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:03.563212 ignition[882]: parsed url from cmdline: "" Apr 30 03:28:03.563216 ignition[882]: no config URL provided Apr 30 03:28:03.563221 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:28:03.563228 ignition[882]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:28:03.563248 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 30 03:28:03.640496 ignition[882]: GET result: OK Apr 30 03:28:03.640635 ignition[882]: config has been read from IMDS userdata Apr 30 03:28:03.640676 ignition[882]: parsing config with SHA512: fc741d48f3eedcd7830a80836667e390dcd8277938d6d053ac63cbdbda0890e7e6e4b781d84d75b43c51d88bb3b3c319c58056a5a88197f38713d29fb715ef12 Apr 30 03:28:03.646313 unknown[882]: fetched base config from "system" Apr 30 03:28:03.646333 unknown[882]: fetched base config from "system" Apr 30 03:28:03.646345 unknown[882]: fetched user config from "azure" Apr 30 03:28:03.651495 ignition[882]: fetch: fetch complete Apr 30 03:28:03.651506 ignition[882]: fetch: fetch passed Apr 30 03:28:03.653767 ignition[882]: Ignition finished successfully Apr 30 03:28:03.657576 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 03:28:03.671487 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 03:28:03.689877 ignition[888]: Ignition 2.19.0 Apr 30 03:28:03.689888 ignition[888]: Stage: kargs Apr 30 03:28:03.692341 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 03:28:03.690122 ignition[888]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:03.690136 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:03.691003 ignition[888]: kargs: kargs passed Apr 30 03:28:03.691050 ignition[888]: Ignition finished successfully Apr 30 03:28:03.707458 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 03:28:03.725279 ignition[894]: Ignition 2.19.0 Apr 30 03:28:03.725291 ignition[894]: Stage: disks Apr 30 03:28:03.727363 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
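The fetch stage above pulls the Ignition config from the Azure Instance Metadata Service (IMDS) endpoint it logs and then records a SHA512 digest of the config it parsed. A rough Python sketch of that retrieval flow, assuming the usual IMDS conventions (the mandatory Metadata: true request header, and userData arriving base64-encoded); it illustrates the sequence the log records, not Ignition's actual Go implementation:

    import base64
    import hashlib
    import urllib.request

    # Endpoint exactly as logged by the Ignition fetch stage above.
    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    # IMDS rejects any request that lacks the Metadata: true header.
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        userdata = resp.read()

    # Azure hands userData back base64-encoded; decode it before parsing.
    config = base64.b64decode(userdata)

    # Ignition logs a SHA512 of the config it is about to parse.
    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())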
Apr 30 03:28:03.725514 ignition[894]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:03.725527 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:03.726405 ignition[894]: disks: disks passed Apr 30 03:28:03.737307 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 03:28:03.726450 ignition[894]: Ignition finished successfully Apr 30 03:28:03.747471 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:28:03.750752 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:28:03.755553 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:28:03.758659 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:28:03.777387 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 03:28:03.831959 systemd-fsck[902]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 30 03:28:03.838527 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 03:28:03.854122 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 03:28:03.945206 kernel: EXT4-fs (sda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none. Apr 30 03:28:03.945759 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 03:28:03.948841 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 03:28:03.990343 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:28:03.996690 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 03:28:04.002210 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (913) Apr 30 03:28:04.007369 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 03:28:04.015730 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:04.023236 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:04.023306 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:28:04.024486 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 03:28:04.024537 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:28:04.034204 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:28:04.041454 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:28:04.044199 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 03:28:04.058401 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 30 03:28:04.542748 systemd-networkd[874]: eth0: Gained IPv6LL Apr 30 03:28:04.543165 systemd-networkd[874]: enP62842s1: Gained IPv6LL Apr 30 03:28:04.593511 coreos-metadata[915]: Apr 30 03:28:04.593 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 30 03:28:04.600483 coreos-metadata[915]: Apr 30 03:28:04.600 INFO Fetch successful Apr 30 03:28:04.603360 coreos-metadata[915]: Apr 30 03:28:04.600 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 30 03:28:04.613245 coreos-metadata[915]: Apr 30 03:28:04.613 INFO Fetch successful Apr 30 03:28:04.631797 coreos-metadata[915]: Apr 30 03:28:04.629 INFO wrote hostname ci-4081.3.3-a-afe39379c7 to /sysroot/etc/hostname Apr 30 03:28:04.637532 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 03:28:04.644542 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 03:28:04.670080 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory Apr 30 03:28:04.679319 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 03:28:04.684151 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 03:28:05.448545 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 03:28:05.460342 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 03:28:05.467435 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 03:28:05.484565 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:05.484138 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 03:28:05.497656 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 03:28:05.523164 ignition[1038]: INFO : Ignition 2.19.0 Apr 30 03:28:05.523164 ignition[1038]: INFO : Stage: mount Apr 30 03:28:05.531035 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:05.531035 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:05.531035 ignition[1038]: INFO : mount: mount passed Apr 30 03:28:05.531035 ignition[1038]: INFO : Ignition finished successfully Apr 30 03:28:05.525246 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:28:05.542180 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:28:05.551370 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:28:05.568256 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1047) Apr 30 03:28:05.582422 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:05.582517 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:05.585214 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:28:05.591207 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:28:05.592860 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 03:28:05.619009 ignition[1063]: INFO : Ignition 2.19.0 Apr 30 03:28:05.619009 ignition[1063]: INFO : Stage: files Apr 30 03:28:05.623516 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:05.623516 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:05.623516 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:28:05.652724 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:28:05.652724 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:28:05.737989 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:28:05.742452 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:28:05.742452 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:28:05.738541 unknown[1063]: wrote ssh authorized keys file for user: core Apr 30 03:28:05.752986 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:28:05.758484 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 03:28:05.815848 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 03:28:09.104043 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:28:09.110352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 03:28:09.110352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 03:28:09.110352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:28:09.110352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:28:09.110352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:28:09.110352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:28:09.110352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:28:09.148263 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:28:09.148263 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:28:09.148263 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:28:09.148263 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 03:28:09.170825 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 03:28:09.170825 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 03:28:09.183996 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Apr 30 03:28:09.781469 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 03:28:10.744909 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 03:28:10.744909 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 03:28:10.773121 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:10.782000 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:10.782000 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 03:28:10.782000 ignition[1063]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:10.782000 ignition[1063]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:10.782000 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:28:10.782000 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:28:10.782000 ignition[1063]: INFO : files: files passed Apr 30 03:28:10.782000 ignition[1063]: INFO : Ignition finished successfully Apr 30 03:28:10.774940 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 03:28:10.812789 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 03:28:10.822587 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 03:28:10.826137 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 03:28:10.829722 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 03:28:10.843649 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:10.843649 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:10.854292 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:10.859977 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:10.863914 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 03:28:10.880431 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 03:28:10.912984 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 03:28:10.913099 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Apr 30 03:28:10.923181 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 03:28:10.926653 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 03:28:10.933276 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 03:28:10.947452 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 03:28:10.963621 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:10.973417 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 03:28:10.986392 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:10.993312 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:10.996825 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 03:28:11.004467 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 03:28:11.004656 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:11.013877 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 03:28:11.016688 systemd[1]: Stopped target basic.target - Basic System. Apr 30 03:28:11.021846 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 03:28:11.027149 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:28:11.033007 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 03:28:11.039140 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 03:28:11.045061 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:28:11.055856 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 03:28:11.061895 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 03:28:11.067370 systemd[1]: Stopped target swap.target - Swaps. Apr 30 03:28:11.072003 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 03:28:11.072216 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:28:11.080424 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:28:11.086724 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:11.087908 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 03:28:11.090134 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:11.096564 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 03:28:11.096713 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 03:28:11.113879 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 03:28:11.117104 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:11.125099 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 03:28:11.125311 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 03:28:11.133076 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 03:28:11.133272 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 03:28:11.147407 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Apr 30 03:28:11.155455 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 03:28:11.160478 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 03:28:11.160686 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:11.173984 ignition[1116]: INFO : Ignition 2.19.0 Apr 30 03:28:11.173984 ignition[1116]: INFO : Stage: umount Apr 30 03:28:11.173984 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:11.173984 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 03:28:11.173984 ignition[1116]: INFO : umount: umount passed Apr 30 03:28:11.173984 ignition[1116]: INFO : Ignition finished successfully Apr 30 03:28:11.164476 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 03:28:11.164637 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:28:11.174550 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 03:28:11.174663 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 03:28:11.180365 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 03:28:11.185131 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 03:28:11.188220 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 03:28:11.188279 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 03:28:11.194382 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 03:28:11.196441 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 03:28:11.202875 systemd[1]: Stopped target network.target - Network. Apr 30 03:28:11.231416 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 03:28:11.231509 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:28:11.240467 systemd[1]: Stopped target paths.target - Path Units. Apr 30 03:28:11.245121 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 03:28:11.250254 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:11.257758 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 03:28:11.258765 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 03:28:11.259259 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 03:28:11.259304 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:28:11.259781 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 03:28:11.259814 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:28:11.260246 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 03:28:11.260294 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 03:28:11.260719 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 03:28:11.260752 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 03:28:11.261310 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 03:28:11.261669 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 03:28:11.263431 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 03:28:11.264070 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 03:28:11.264156 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Apr 30 03:28:11.264711 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 03:28:11.264783 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 03:28:11.267067 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 03:28:11.267161 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 03:28:11.292634 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 03:28:11.292769 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 03:28:11.294241 systemd-networkd[874]: eth0: DHCPv6 lease lost Apr 30 03:28:11.299369 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 03:28:11.299484 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 03:28:11.312917 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 03:28:11.313000 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:11.334407 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 03:28:11.341429 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 03:28:11.341519 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:28:11.347582 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:28:11.347638 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:11.361027 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 03:28:11.362977 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:11.394827 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 03:28:11.394915 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:11.404851 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:11.420720 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 03:28:11.421041 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:11.430955 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 03:28:11.431040 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:11.439339 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 03:28:11.439398 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:11.445069 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 03:28:11.445133 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:28:11.468903 kernel: hv_netvsc 6045bde1-531f-6045-bde1-531f6045bde1 eth0: Data path switched from VF: enP62842s1 Apr 30 03:28:11.451335 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 03:28:11.451382 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 03:28:11.459301 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:28:11.459367 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:11.487399 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 03:28:11.490542 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 03:28:11.490629 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 30 03:28:11.497458 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 03:28:11.497523 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:28:11.514013 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 03:28:11.514097 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:11.520241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:28:11.520299 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:11.529976 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 03:28:11.530099 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 03:28:11.540551 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 03:28:11.540663 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 03:28:11.550734 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 03:28:11.557439 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 03:28:11.566400 systemd[1]: Switching root. Apr 30 03:28:11.625897 systemd-journald[176]: Journal stopped Apr 30 03:28:16.483532 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Apr 30 03:28:16.483584 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 03:28:16.483606 kernel: SELinux: policy capability open_perms=1 Apr 30 03:28:16.483625 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 03:28:16.483641 kernel: SELinux: policy capability always_check_network=0 Apr 30 03:28:16.483658 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 03:28:16.483677 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 03:28:16.483701 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 03:28:16.483724 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 03:28:16.483741 kernel: audit: type=1403 audit(1745983693.481:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 03:28:16.483761 systemd[1]: Successfully loaded SELinux policy in 156.249ms. Apr 30 03:28:16.483783 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.244ms. Apr 30 03:28:16.483805 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:28:16.483830 systemd[1]: Detected virtualization microsoft. Apr 30 03:28:16.483854 systemd[1]: Detected architecture x86-64. Apr 30 03:28:16.483870 systemd[1]: Detected first boot. Apr 30 03:28:16.483887 systemd[1]: Hostname set to <ci-4081.3.3-a-afe39379c7>. Apr 30 03:28:16.483899 systemd[1]: Initializing machine ID from random generator. Apr 30 03:28:16.483912 zram_generator::config[1159]: No configuration found. Apr 30 03:28:16.483929 systemd[1]: Populated /etc with preset unit settings. Apr 30 03:28:16.483944 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 03:28:16.483958 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 03:28:16.483969 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 03:28:16.486785 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 03:28:16.486809 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 03:28:16.486824 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 03:28:16.486845 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 03:28:16.486859 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 03:28:16.486874 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 03:28:16.486890 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 03:28:16.486905 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 03:28:16.486922 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:16.486938 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:16.486954 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 03:28:16.486975 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 03:28:16.486990 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 03:28:16.487003 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:28:16.487016 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 03:28:16.487028 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:16.487040 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 03:28:16.487056 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 03:28:16.487066 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 03:28:16.487081 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 03:28:16.487092 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:16.487105 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:28:16.487118 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:28:16.487130 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:28:16.487141 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 03:28:16.487153 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 03:28:16.487166 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:16.487178 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:16.487212 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:16.487223 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 03:28:16.487237 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 03:28:16.487250 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 03:28:16.487266 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 03:28:16.487277 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:16.487290 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Apr 30 03:28:16.487300 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 03:28:16.487313 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 03:28:16.487324 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 03:28:16.487338 systemd[1]: Reached target machines.target - Containers. Apr 30 03:28:16.487350 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 03:28:16.487364 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:16.487375 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:28:16.487388 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 03:28:16.487400 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:16.487414 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:28:16.487424 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:28:16.487438 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 03:28:16.487448 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:16.487464 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 03:28:16.487476 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 03:28:16.487489 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 03:28:16.487500 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 03:28:16.487512 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 03:28:16.487522 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:28:16.487536 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:28:16.487546 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 03:28:16.487561 kernel: fuse: init (API version 7.39) Apr 30 03:28:16.487571 kernel: ACPI: bus type drm_connector registered Apr 30 03:28:16.487582 kernel: loop: module loaded Apr 30 03:28:16.487592 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 03:28:16.487604 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:28:16.487616 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 03:28:16.487626 systemd[1]: Stopped verity-setup.service. Apr 30 03:28:16.487640 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:16.487678 systemd-journald[1255]: Collecting audit messages is disabled. Apr 30 03:28:16.487705 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 03:28:16.487719 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 03:28:16.487730 systemd-journald[1255]: Journal started Apr 30 03:28:16.487757 systemd-journald[1255]: Runtime Journal (/run/log/journal/06ab2f477dca4b698a2f3005b2a674ab) is 8.0M, max 158.8M, 150.8M free. 
Apr 30 03:28:15.650161 systemd[1]: Queued start job for default target multi-user.target. Apr 30 03:28:15.778763 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 30 03:28:15.779155 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 03:28:16.499568 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:28:16.500173 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 03:28:16.503096 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 03:28:16.506348 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 03:28:16.509451 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 03:28:16.512359 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 03:28:16.516047 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:16.519951 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 03:28:16.520122 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 03:28:16.523845 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:16.524010 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:16.528536 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:28:16.528756 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:28:16.535162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:16.535401 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:16.539102 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 03:28:16.539308 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 03:28:16.542769 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:16.542942 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:16.546385 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:16.549765 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 03:28:16.553579 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 03:28:16.574650 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 03:28:16.585270 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 03:28:16.597282 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 03:28:16.601295 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 03:28:16.601348 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:28:16.607988 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 03:28:16.619371 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 03:28:16.631428 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 03:28:16.634597 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:16.655374 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Apr 30 03:28:16.659889 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 03:28:16.663446 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:28:16.669327 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 03:28:16.672780 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:28:16.677358 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:28:16.684329 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 03:28:16.689367 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:28:16.696816 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:16.708527 systemd-journald[1255]: Time spent on flushing to /var/log/journal/06ab2f477dca4b698a2f3005b2a674ab is 47.917ms for 958 entries. Apr 30 03:28:16.708527 systemd-journald[1255]: System Journal (/var/log/journal/06ab2f477dca4b698a2f3005b2a674ab) is 8.0M, max 2.6G, 2.6G free. Apr 30 03:28:16.878101 systemd-journald[1255]: Received client request to flush runtime journal. Apr 30 03:28:16.878219 kernel: loop0: detected capacity change from 0 to 140768 Apr 30 03:28:16.704512 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 03:28:16.712561 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 03:28:16.716622 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 03:28:16.725522 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 03:28:16.729684 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 03:28:16.744388 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 03:28:16.749394 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 03:28:16.761930 udevadm[1305]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 03:28:16.830501 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:16.880201 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 03:28:16.924424 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Apr 30 03:28:16.924463 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Apr 30 03:28:16.931104 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:28:16.941396 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 03:28:16.945940 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 03:28:16.947818 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 03:28:17.069209 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 03:28:17.077420 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:28:17.098857 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. 
Apr 30 03:28:17.098883 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Apr 30 03:28:17.103098 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:28:17.309216 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 03:28:17.371215 kernel: loop1: detected capacity change from 0 to 31056 Apr 30 03:28:17.775245 kernel: loop2: detected capacity change from 0 to 205544 Apr 30 03:28:17.828226 kernel: loop3: detected capacity change from 0 to 142488 Apr 30 03:28:17.967447 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 03:28:17.976426 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:18.008407 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Apr 30 03:28:18.111092 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:18.129315 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:28:18.201414 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 03:28:18.283260 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 03:28:18.291899 kernel: loop4: detected capacity change from 0 to 140768 Apr 30 03:28:18.288279 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 03:28:18.350496 kernel: loop5: detected capacity change from 0 to 31056 Apr 30 03:28:18.377225 kernel: loop6: detected capacity change from 0 to 205544 Apr 30 03:28:18.400970 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:28:18.406215 kernel: hv_vmbus: registering driver hyperv_fb Apr 30 03:28:18.416606 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Apr 30 03:28:18.416706 kernel: hv_vmbus: registering driver hv_balloon Apr 30 03:28:18.416736 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Apr 30 03:28:18.420202 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Apr 30 03:28:18.426209 kernel: Console: switching to colour dummy device 80x25 Apr 30 03:28:18.426277 kernel: loop7: detected capacity change from 0 to 142488 Apr 30 03:28:18.432404 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 03:28:18.452282 (sd-merge)[1355]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Apr 30 03:28:18.453517 (sd-merge)[1355]: Merged extensions into '/usr'. Apr 30 03:28:18.469492 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 03:28:18.469512 systemd[1]: Reloading... Apr 30 03:28:18.656179 systemd-networkd[1331]: lo: Link UP Apr 30 03:28:18.656736 systemd-networkd[1331]: lo: Gained carrier Apr 30 03:28:18.666198 systemd-networkd[1331]: Enumeration completed Apr 30 03:28:18.670752 systemd-networkd[1331]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:18.672228 systemd-networkd[1331]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:28:18.727465 zram_generator::config[1417]: No configuration found. 
Apr 30 03:28:18.744165 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1325) Apr 30 03:28:18.757788 kernel: mlx5_core f57a:00:02.0 enP62842s1: Link up Apr 30 03:28:18.792421 kernel: hv_netvsc 6045bde1-531f-6045-bde1-531f6045bde1 eth0: Data path switched to VF: enP62842s1 Apr 30 03:28:18.795354 systemd-networkd[1331]: enP62842s1: Link UP Apr 30 03:28:18.797374 systemd-networkd[1331]: eth0: Link UP Apr 30 03:28:18.797499 systemd-networkd[1331]: eth0: Gained carrier Apr 30 03:28:18.797694 systemd-networkd[1331]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:18.803495 systemd-networkd[1331]: enP62842s1: Gained carrier Apr 30 03:28:18.821000 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Apr 30 03:28:18.834289 systemd-networkd[1331]: eth0: DHCPv4 address 10.200.8.38/24, gateway 10.200.8.1 acquired from 168.63.129.16 Apr 30 03:28:18.982365 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:19.061762 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 30 03:28:19.065641 systemd[1]: Reloading finished in 595 ms. Apr 30 03:28:19.101049 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:28:19.104814 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 03:28:19.146530 systemd[1]: Starting ensure-sysext.service... Apr 30 03:28:19.152307 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:28:19.163530 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 03:28:19.168692 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:28:19.177594 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:19.196642 systemd[1]: Reloading requested from client PID 1493 ('systemctl') (unit ensure-sysext.service)... Apr 30 03:28:19.196660 systemd[1]: Reloading... Apr 30 03:28:19.202930 systemd-tmpfiles[1496]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 03:28:19.203453 systemd-tmpfiles[1496]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 03:28:19.207788 systemd-tmpfiles[1496]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 03:28:19.208902 systemd-tmpfiles[1496]: ACLs are not supported, ignoring. Apr 30 03:28:19.209120 systemd-tmpfiles[1496]: ACLs are not supported, ignoring. Apr 30 03:28:19.218058 systemd-tmpfiles[1496]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:28:19.218285 systemd-tmpfiles[1496]: Skipping /boot Apr 30 03:28:19.240165 systemd-tmpfiles[1496]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:28:19.240198 systemd-tmpfiles[1496]: Skipping /boot Apr 30 03:28:19.310213 zram_generator::config[1530]: No configuration found. Apr 30 03:28:19.442529 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 30 03:28:19.522504 systemd[1]: Reloading finished in 325 ms. Apr 30 03:28:19.541518 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 03:28:19.552857 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 03:28:19.558933 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:19.579585 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:28:19.603372 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 03:28:19.608569 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 03:28:19.627036 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 03:28:19.631460 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:28:19.642530 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:28:19.648518 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:19.649157 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:19.660528 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:19.674490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:28:19.680688 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:19.685388 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:19.685707 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:19.687868 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:19.688946 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:19.704686 lvm[1601]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:28:19.709381 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:19.709666 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:19.720610 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:19.727698 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:19.727921 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:19.729028 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 03:28:19.736638 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:19.737430 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:19.745603 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:19.745820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
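Unit names above such as systemd-fsck@dev-disk-by\x2dlabel-OEM.service show systemd's path escaping: '/' separators in /dev/disk/by-label/OEM become '-', so the literal '-' inside 'by-label' must be encoded as \x2d. systemd-escape(1) implements the canonical rules; a simplified sketch of the path case (ignoring edge cases such as a leading dot):

    def systemd_escape_path(path: str) -> str:
        # approximate systemd's path-to-unit-name escaping (see systemd-escape(1))
        def esc(part: str) -> str:
            return "".join(ch if ch.isalnum() or ch in "_." else f"\\x{ord(ch):02x}"
                           for ch in part)
        return "-".join(esc(p) for p in path.strip("/").split("/"))

    print(systemd_escape_path("/dev/disk/by-label/OEM"))  # dev-disk-by\x2dlabel-OEM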
Apr 30 03:28:19.768729 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:19.770976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:19.775789 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:28:19.788521 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:28:19.806551 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:19.807779 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:19.856896 augenrules[1627]: No rules Apr 30 03:28:19.807959 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:28:19.861814 lvm[1635]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:28:19.808826 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:19.810081 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:28:19.810708 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:28:19.811626 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:19.811750 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:19.812548 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:19.812668 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:19.817548 systemd[1]: Finished ensure-sysext.service. Apr 30 03:28:19.821862 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:28:19.825354 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:28:19.825517 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:28:19.842496 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:28:19.842750 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:28:19.846868 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:19.847061 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:19.850745 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:28:19.853045 systemd-resolved[1606]: Positive Trust Anchors: Apr 30 03:28:19.853059 systemd-resolved[1606]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:28:19.853112 systemd-resolved[1606]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:28:19.853728 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:28:19.861044 systemd-networkd[1331]: eth0: Gained IPv6LL Apr 30 03:28:19.861560 systemd-networkd[1331]: enP62842s1: Gained IPv6LL Apr 30 03:28:19.870139 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:28:19.876807 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:28:19.882182 systemd-resolved[1606]: Using system hostname 'ci-4081.3.3-a-afe39379c7'. Apr 30 03:28:19.884630 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:28:19.885882 systemd[1]: Reached target network.target - Network. Apr 30 03:28:19.886784 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:28:19.887286 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:19.897475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:20.703737 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:28:20.710059 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:28:21.704110 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 03:28:21.713987 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 03:28:21.720511 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:28:21.746911 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:28:21.750585 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:28:21.753871 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:28:21.757719 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:28:21.761226 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:28:21.764375 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:28:21.767759 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:28:21.771215 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:28:21.771267 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:28:21.773776 systemd[1]: Reached target timers.target - Timer Units. 
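The trust-anchor entry above is systemd-resolved loading the root zone's built-in DNSSEC anchor: a DS record whose fields are the key tag (20326, the current root key-signing key), the algorithm (8, RSA/SHA-256), the digest type (2, SHA-256), and the digest of the key itself. Splitting it out as a quick sketch:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _class, rtype, key_tag, algorithm, digest_type, digest = ds.split()
    assert (rtype, key_tag, algorithm, digest_type) == ("DS", "20326", "8", "2")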
Apr 30 03:28:21.827727 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:28:21.833601 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:28:21.845354 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:28:21.849178 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:28:21.852297 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:28:21.854966 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:28:21.857634 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:28:21.857683 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:28:21.865324 systemd[1]: Starting chronyd.service - NTP client/server... Apr 30 03:28:21.870394 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:28:21.881662 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 03:28:21.892409 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:28:21.905402 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:28:21.916160 (chronyd)[1653]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Apr 30 03:28:21.920704 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:28:21.924659 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:28:21.924724 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Apr 30 03:28:21.926017 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Apr 30 03:28:21.929209 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Apr 30 03:28:21.936459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:21.942899 chronyd[1664]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Apr 30 03:28:21.949427 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:28:21.952254 jq[1657]: false Apr 30 03:28:21.963415 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:28:21.971282 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 03:28:21.978257 chronyd[1664]: Timezone right/UTC failed leap second check, ignoring Apr 30 03:28:21.979034 chronyd[1664]: Loaded seccomp filter (level 2) Apr 30 03:28:21.985572 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:28:21.992391 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:28:22.005923 KVP[1661]: KVP starting; pid is:1661 Apr 30 03:28:22.008428 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:28:22.012063 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:28:22.013817 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
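docker.socket and sshd.socket above are socket-activated: systemd opens the listening sockets itself and only starts the daemon when a client connects, handing the sockets over as inherited file descriptors beginning at fd 3, described by the LISTEN_FDS and LISTEN_PID environment variables. A minimal receiving-side sketch (what sd_listen_fds(3) does):

    import os
    import socket

    SD_LISTEN_FDS_START = 3  # first fd systemd passes, per sd_listen_fds(3)

    def inherited_sockets():
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []  # the fds were not destined for this process
        count = int(os.environ.get("LISTEN_FDS", "0"))
        return [socket.socket(fileno=fd)
                for fd in range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + count)]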
Apr 30 03:28:22.020428 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:28:22.024006 dbus-daemon[1656]: [system] SELinux support is enabled Apr 30 03:28:22.027438 KVP[1661]: KVP LIC Version: 3.1 Apr 30 03:28:22.028458 kernel: hv_utils: KVP IC version 4.0 Apr 30 03:28:22.029303 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:28:22.032387 extend-filesystems[1660]: Found loop4 Apr 30 03:28:22.032387 extend-filesystems[1660]: Found loop5 Apr 30 03:28:22.032387 extend-filesystems[1660]: Found loop6 Apr 30 03:28:22.032387 extend-filesystems[1660]: Found loop7 Apr 30 03:28:22.032387 extend-filesystems[1660]: Found sda Apr 30 03:28:22.032387 extend-filesystems[1660]: Found sda1 Apr 30 03:28:22.032387 extend-filesystems[1660]: Found sda2 Apr 30 03:28:22.032387 extend-filesystems[1660]: Found sda3 Apr 30 03:28:22.032387 extend-filesystems[1660]: Found usr Apr 30 03:28:22.032387 extend-filesystems[1660]: Found sda4 Apr 30 03:28:22.032387 extend-filesystems[1660]: Found sda6 Apr 30 03:28:22.032387 extend-filesystems[1660]: Found sda7 Apr 30 03:28:22.032387 extend-filesystems[1660]: Found sda9 Apr 30 03:28:22.032387 extend-filesystems[1660]: Checking size of /dev/sda9 Apr 30 03:28:22.040551 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:28:22.061886 systemd[1]: Started chronyd.service - NTP client/server. Apr 30 03:28:22.092781 jq[1681]: true Apr 30 03:28:22.081668 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:28:22.083288 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:28:22.093829 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:28:22.094581 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:28:22.101979 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:28:22.103255 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 03:28:22.125662 extend-filesystems[1660]: Old size kept for /dev/sda9 Apr 30 03:28:22.129430 extend-filesystems[1660]: Found sr0 Apr 30 03:28:22.131728 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:28:22.131975 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:28:22.139450 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:28:22.141974 jq[1692]: true Apr 30 03:28:22.148806 (ntainerd)[1694]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:28:22.179572 systemd-logind[1678]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:28:22.184644 systemd-logind[1678]: New seat seat0. Apr 30 03:28:22.186576 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:28:22.186617 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:28:22.196745 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
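extend-filesystems above enumerates every partition and later decides "Old size kept for /dev/sda9": the root filesystem already spans its partition, so there is nothing to grow. The check reduces to comparing the filesystem's size with the backing block device's size; a hedged sketch (BLKGETSIZE64 is the Linux ioctl value, and opening the device requires root):

    import fcntl
    import os
    import struct

    BLKGETSIZE64 = 0x80081272  # Linux ioctl: block device size in bytes

    def device_bytes(dev: str) -> int:
        with open(dev, "rb") as f:
            buf = bytearray(8)
            fcntl.ioctl(f.fileno(), BLKGETSIZE64, buf)
        return struct.unpack("<Q", buf)[0]

    def filesystem_bytes(mountpoint: str) -> int:
        st = os.statvfs(mountpoint)
        return st.f_frsize * st.f_blocks

    # grow only when the partition has outgrown the filesystem on it
    needs_grow = device_bytes("/dev/sda9") > filesystem_bytes("/")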
Apr 30 03:28:22.198689 update_engine[1680]: I20250430 03:28:22.198241 1680 main.cc:92] Flatcar Update Engine starting Apr 30 03:28:22.196778 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:28:22.207804 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:28:22.214365 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:28:22.222252 update_engine[1680]: I20250430 03:28:22.220456 1680 update_check_scheduler.cc:74] Next update check in 2m57s Apr 30 03:28:22.234554 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:28:22.251283 coreos-metadata[1655]: Apr 30 03:28:22.251 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 30 03:28:22.252003 tar[1690]: linux-amd64/helm Apr 30 03:28:22.255343 coreos-metadata[1655]: Apr 30 03:28:22.255 INFO Fetch successful Apr 30 03:28:22.257885 coreos-metadata[1655]: Apr 30 03:28:22.257 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Apr 30 03:28:22.263381 coreos-metadata[1655]: Apr 30 03:28:22.263 INFO Fetch successful Apr 30 03:28:22.265238 coreos-metadata[1655]: Apr 30 03:28:22.265 INFO Fetching http://168.63.129.16/machine/58161f29-525b-4637-bcbe-89fcb57acfa8/c922e924%2De975%2D4967%2Dbfac%2Dddfd9f6721a5.%5Fci%2D4081.3.3%2Da%2Dafe39379c7?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Apr 30 03:28:22.268106 coreos-metadata[1655]: Apr 30 03:28:22.267 INFO Fetch successful Apr 30 03:28:22.268254 coreos-metadata[1655]: Apr 30 03:28:22.268 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Apr 30 03:28:22.283828 coreos-metadata[1655]: Apr 30 03:28:22.283 INFO Fetch successful Apr 30 03:28:22.361017 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:28:22.365610 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:28:22.382107 bash[1735]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:28:22.385664 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:28:22.396788 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 30 03:28:22.441217 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1723) Apr 30 03:28:22.831247 locksmithd[1713]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:28:22.906679 sshd_keygen[1704]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:28:22.947694 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:28:22.959462 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:28:22.968807 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Apr 30 03:28:22.978879 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:28:22.979111 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:28:22.996048 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:28:23.028450 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Apr 30 03:28:23.059381 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:28:23.074136 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:28:23.086561 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
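coreos-metadata above uses both Azure metadata channels: the WireServer at 168.63.129.16 (versions, goal state, shared config) and the Instance Metadata Service at 169.254.169.254 for instance facts such as vmSize. IMDS is unauthenticated HTTP but refuses requests that lack the Metadata header. Replaying the exact vmSize fetch from the log, as a sketch:

    import urllib.request

    url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())  # the VM SKU, e.g. a Standard_* size name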
Apr 30 03:28:23.093048 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:28:23.221647 tar[1690]: linux-amd64/LICENSE Apr 30 03:28:23.221647 tar[1690]: linux-amd64/README.md Apr 30 03:28:23.239253 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:28:23.281209 containerd[1694]: time="2025-04-30T03:28:23.279963600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:28:23.311742 containerd[1694]: time="2025-04-30T03:28:23.311688500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:23.313777 containerd[1694]: time="2025-04-30T03:28:23.313722700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:23.313924 containerd[1694]: time="2025-04-30T03:28:23.313906700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:28:23.314003 containerd[1694]: time="2025-04-30T03:28:23.313988400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:28:23.314288 containerd[1694]: time="2025-04-30T03:28:23.314256100Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:28:23.314396 containerd[1694]: time="2025-04-30T03:28:23.314380800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:23.314540 containerd[1694]: time="2025-04-30T03:28:23.314511000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:23.314540 containerd[1694]: time="2025-04-30T03:28:23.314533600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:23.314765 containerd[1694]: time="2025-04-30T03:28:23.314739300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:23.314765 containerd[1694]: time="2025-04-30T03:28:23.314760600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:23.314861 containerd[1694]: time="2025-04-30T03:28:23.314779600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:23.314861 containerd[1694]: time="2025-04-30T03:28:23.314793900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:23.315886 containerd[1694]: time="2025-04-30T03:28:23.315030900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:23.315886 containerd[1694]: time="2025-04-30T03:28:23.315314000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Apr 30 03:28:23.315886 containerd[1694]: time="2025-04-30T03:28:23.315478700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:23.315886 containerd[1694]: time="2025-04-30T03:28:23.315499100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:28:23.315886 containerd[1694]: time="2025-04-30T03:28:23.315588200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:28:23.315886 containerd[1694]: time="2025-04-30T03:28:23.315647400Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:28:23.327682 containerd[1694]: time="2025-04-30T03:28:23.327635900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:28:23.328083 containerd[1694]: time="2025-04-30T03:28:23.327886500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:28:23.328083 containerd[1694]: time="2025-04-30T03:28:23.327984900Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:28:23.328083 containerd[1694]: time="2025-04-30T03:28:23.328031500Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:28:23.328083 containerd[1694]: time="2025-04-30T03:28:23.328053800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:28:23.329208 containerd[1694]: time="2025-04-30T03:28:23.328441700Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:28:23.329208 containerd[1694]: time="2025-04-30T03:28:23.329103700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:28:23.329497 containerd[1694]: time="2025-04-30T03:28:23.329457100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:28:23.329587 containerd[1694]: time="2025-04-30T03:28:23.329573400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:28:23.329701 containerd[1694]: time="2025-04-30T03:28:23.329684400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:28:23.329783 containerd[1694]: time="2025-04-30T03:28:23.329770100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:28:23.329867 containerd[1694]: time="2025-04-30T03:28:23.329854400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:28:23.329944 containerd[1694]: time="2025-04-30T03:28:23.329932200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:28:23.330029 containerd[1694]: time="2025-04-30T03:28:23.330011800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Apr 30 03:28:23.330114 containerd[1694]: time="2025-04-30T03:28:23.330096700Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:28:23.330253 containerd[1694]: time="2025-04-30T03:28:23.330233500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:28:23.330358 containerd[1694]: time="2025-04-30T03:28:23.330344000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:28:23.330441 containerd[1694]: time="2025-04-30T03:28:23.330428800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:28:23.330567 containerd[1694]: time="2025-04-30T03:28:23.330543500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:28:23.330861 containerd[1694]: time="2025-04-30T03:28:23.330829700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 03:28:23.330972 containerd[1694]: time="2025-04-30T03:28:23.330870300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:28:23.330972 containerd[1694]: time="2025-04-30T03:28:23.330899900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:28:23.330972 containerd[1694]: time="2025-04-30T03:28:23.330922500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:28:23.330972 containerd[1694]: time="2025-04-30T03:28:23.330944900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:28:23.331135 containerd[1694]: time="2025-04-30T03:28:23.330967600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:28:23.331135 containerd[1694]: time="2025-04-30T03:28:23.331001300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:28:23.331135 containerd[1694]: time="2025-04-30T03:28:23.331028100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:28:23.331135 containerd[1694]: time="2025-04-30T03:28:23.331055500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:28:23.331135 containerd[1694]: time="2025-04-30T03:28:23.331081300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:28:23.331135 containerd[1694]: time="2025-04-30T03:28:23.331105400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:28:23.331135 containerd[1694]: time="2025-04-30T03:28:23.331126200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:28:23.331408 containerd[1694]: time="2025-04-30T03:28:23.331155600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:28:23.331408 containerd[1694]: time="2025-04-30T03:28:23.331217200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Apr 30 03:28:23.331408 containerd[1694]: time="2025-04-30T03:28:23.331238500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:28:23.331408 containerd[1694]: time="2025-04-30T03:28:23.331260700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:28:23.331408 containerd[1694]: time="2025-04-30T03:28:23.331331000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:28:23.331408 containerd[1694]: time="2025-04-30T03:28:23.331362100Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:28:23.331408 containerd[1694]: time="2025-04-30T03:28:23.331379700Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:28:23.331692 containerd[1694]: time="2025-04-30T03:28:23.331403700Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:28:23.331692 containerd[1694]: time="2025-04-30T03:28:23.331423400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:28:23.331692 containerd[1694]: time="2025-04-30T03:28:23.331446600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:28:23.331692 containerd[1694]: time="2025-04-30T03:28:23.331466400Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:28:23.331692 containerd[1694]: time="2025-04-30T03:28:23.331482200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 30 03:28:23.332316 containerd[1694]: time="2025-04-30T03:28:23.331899500Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:28:23.332316 containerd[1694]: time="2025-04-30T03:28:23.332106700Z" level=info msg="Connect containerd service" Apr 30 03:28:23.332316 containerd[1694]: time="2025-04-30T03:28:23.332162300Z" level=info msg="using legacy CRI server" Apr 30 03:28:23.332316 containerd[1694]: time="2025-04-30T03:28:23.332172300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:28:23.332710 containerd[1694]: time="2025-04-30T03:28:23.332397600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:28:23.333476 containerd[1694]: time="2025-04-30T03:28:23.333454700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:28:23.334213 
containerd[1694]: time="2025-04-30T03:28:23.333847700Z" level=info msg="Start subscribing containerd event" Apr 30 03:28:23.334213 containerd[1694]: time="2025-04-30T03:28:23.333878600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:28:23.334213 containerd[1694]: time="2025-04-30T03:28:23.333913200Z" level=info msg="Start recovering state" Apr 30 03:28:23.334213 containerd[1694]: time="2025-04-30T03:28:23.333950700Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:28:23.334213 containerd[1694]: time="2025-04-30T03:28:23.333987800Z" level=info msg="Start event monitor" Apr 30 03:28:23.334213 containerd[1694]: time="2025-04-30T03:28:23.334006300Z" level=info msg="Start snapshots syncer" Apr 30 03:28:23.334213 containerd[1694]: time="2025-04-30T03:28:23.334019100Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:28:23.334213 containerd[1694]: time="2025-04-30T03:28:23.334032700Z" level=info msg="Start streaming server" Apr 30 03:28:23.334218 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:28:23.338719 containerd[1694]: time="2025-04-30T03:28:23.337809900Z" level=info msg="containerd successfully booted in 0.058646s" Apr 30 03:28:23.782110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:23.786736 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:28:23.787690 (kubelet)[1815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:28:23.793302 systemd[1]: Startup finished in 782ms (firmware) + 26.193s (loader) + 1.151s (kernel) + 14.957s (initrd) + 10.466s (userspace) = 53.552s. Apr 30 03:28:24.080735 login[1797]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 03:28:24.088298 login[1799]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 03:28:24.094064 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:28:24.101688 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:28:24.105236 systemd-logind[1678]: New session 1 of user core. Apr 30 03:28:24.109245 systemd-logind[1678]: New session 2 of user core. Apr 30 03:28:24.127280 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:28:24.137584 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:28:24.141581 (systemd)[1826]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:28:24.333299 systemd[1826]: Queued start job for default target default.target. Apr 30 03:28:24.339600 systemd[1826]: Created slice app.slice - User Application Slice. Apr 30 03:28:24.339637 systemd[1826]: Reached target paths.target - Paths. Apr 30 03:28:24.339656 systemd[1826]: Reached target timers.target - Timers. Apr 30 03:28:24.343345 systemd[1826]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:28:24.362789 systemd[1826]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:28:24.362938 systemd[1826]: Reached target sockets.target - Sockets. Apr 30 03:28:24.362956 systemd[1826]: Reached target basic.target - Basic System. Apr 30 03:28:24.363290 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:28:24.363661 systemd[1826]: Reached target default.target - Main User Target. 
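containerd above finishes booting in roughly 59 ms, serving gRPC on /run/containerd/containerd.sock (plus a ttrpc socket), and warns that /etc/cni/net.d holds no CNI config yet; that is normal this early, since the CNI plugin arrives when the node joins a cluster. A trivial liveness probe for the socket, assuming permission to open it:

    import socket

    # containerd's gRPC endpoint, as reported in the log above
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        s.connect("/run/containerd/containerd.sock")  # raises if not serving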
Apr 30 03:28:24.363728 systemd[1826]: Startup finished in 210ms. Apr 30 03:28:24.372490 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:28:24.373832 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 03:28:24.564782 kubelet[1815]: E0430 03:28:24.564713 1815 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:28:24.566691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:28:24.567030 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:28:24.734874 waagent[1795]: 2025-04-30T03:28:24.734764Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Apr 30 03:28:24.738891 waagent[1795]: 2025-04-30T03:28:24.738816Z INFO Daemon Daemon OS: flatcar 4081.3.3 Apr 30 03:28:24.743343 waagent[1795]: 2025-04-30T03:28:24.743246Z INFO Daemon Daemon Python: 3.11.9 Apr 30 03:28:24.746145 waagent[1795]: 2025-04-30T03:28:24.746076Z INFO Daemon Daemon Run daemon Apr 30 03:28:24.748517 waagent[1795]: 2025-04-30T03:28:24.748365Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.3' Apr 30 03:28:24.752847 waagent[1795]: 2025-04-30T03:28:24.752786Z INFO Daemon Daemon Using waagent for provisioning Apr 30 03:28:24.755795 waagent[1795]: 2025-04-30T03:28:24.755745Z INFO Daemon Daemon Activate resource disk Apr 30 03:28:24.789494 waagent[1795]: 2025-04-30T03:28:24.756838Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 30 03:28:24.789494 waagent[1795]: 2025-04-30T03:28:24.761340Z INFO Daemon Daemon Found device: None Apr 30 03:28:24.789494 waagent[1795]: 2025-04-30T03:28:24.762860Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 30 03:28:24.789494 waagent[1795]: 2025-04-30T03:28:24.763771Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 30 03:28:24.789494 waagent[1795]: 2025-04-30T03:28:24.766375Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 03:28:24.789494 waagent[1795]: 2025-04-30T03:28:24.767069Z INFO Daemon Daemon Running default provisioning handler Apr 30 03:28:24.789494 waagent[1795]: 2025-04-30T03:28:24.775571Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Apr 30 03:28:24.789494 waagent[1795]: 2025-04-30T03:28:24.777811Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 30 03:28:24.789494 waagent[1795]: 2025-04-30T03:28:24.778500Z INFO Daemon Daemon cloud-init is enabled: False Apr 30 03:28:24.789494 waagent[1795]: 2025-04-30T03:28:24.779053Z INFO Daemon Daemon Copying ovf-env.xml Apr 30 03:28:24.859925 waagent[1795]: 2025-04-30T03:28:24.856943Z INFO Daemon Daemon Successfully mounted dvd Apr 30 03:28:24.902832 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
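The kubelet crash earlier in this block is the usual pre-bootstrap loop: kubelet.service is enabled at build time, but /var/lib/kubelet/config.yaml only appears once kubeadm (or equivalent provisioning tooling) bootstraps the node, so the unit fails and systemd keeps retrying until then. The missing file is a KubeletConfiguration document; a minimal, hypothetical sketch of its shape (real values come from the bootstrap tooling, not from hand-editing):

    # /var/lib/kubelet/config.yaml (normally written by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # cluster-specific settings (cgroupDriver, clusterDNS, ...) omitted here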
Apr 30 03:28:24.905680 waagent[1795]: 2025-04-30T03:28:24.905582Z INFO Daemon Daemon Detect protocol endpoint Apr 30 03:28:24.921775 waagent[1795]: 2025-04-30T03:28:24.907134Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 03:28:24.921775 waagent[1795]: 2025-04-30T03:28:24.907673Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Apr 30 03:28:24.921775 waagent[1795]: 2025-04-30T03:28:24.908122Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 30 03:28:24.921775 waagent[1795]: 2025-04-30T03:28:24.909329Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 30 03:28:24.921775 waagent[1795]: 2025-04-30T03:28:24.910170Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 30 03:28:24.933896 waagent[1795]: 2025-04-30T03:28:24.933833Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 30 03:28:24.937086 waagent[1795]: 2025-04-30T03:28:24.937046Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 30 03:28:24.941624 waagent[1795]: 2025-04-30T03:28:24.938000Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 30 03:28:25.045829 waagent[1795]: 2025-04-30T03:28:25.045657Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 30 03:28:25.050998 waagent[1795]: 2025-04-30T03:28:25.046912Z INFO Daemon Daemon Forcing an update of the goal state. Apr 30 03:28:25.051756 waagent[1795]: 2025-04-30T03:28:25.051697Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 30 03:28:25.068070 waagent[1795]: 2025-04-30T03:28:25.067992Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Apr 30 03:28:25.083344 waagent[1795]: 2025-04-30T03:28:25.069789Z INFO Daemon Apr 30 03:28:25.083344 waagent[1795]: 2025-04-30T03:28:25.071508Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 16541c15-dc1c-445c-8110-d272b0cf214c eTag: 11979280009188351714 source: Fabric] Apr 30 03:28:25.083344 waagent[1795]: 2025-04-30T03:28:25.072892Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Apr 30 03:28:25.083344 waagent[1795]: 2025-04-30T03:28:25.074026Z INFO Daemon Apr 30 03:28:25.083344 waagent[1795]: 2025-04-30T03:28:25.074855Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 30 03:28:25.086723 waagent[1795]: 2025-04-30T03:28:25.086672Z INFO Daemon Daemon Downloading artifacts profile blob Apr 30 03:28:25.220502 waagent[1795]: 2025-04-30T03:28:25.220408Z INFO Daemon Downloaded certificate {'thumbprint': 'FD4875753147655A21B1C59A631C1E1641DF6D70', 'hasPrivateKey': True} Apr 30 03:28:25.232063 waagent[1795]: 2025-04-30T03:28:25.222346Z INFO Daemon Downloaded certificate {'thumbprint': '36BD75E7034B543911E4DC08673EC45A9220C7B8', 'hasPrivateKey': False} Apr 30 03:28:25.232063 waagent[1795]: 2025-04-30T03:28:25.223783Z INFO Daemon Fetch goal state completed Apr 30 03:28:25.259414 waagent[1795]: 2025-04-30T03:28:25.259326Z INFO Daemon Daemon Starting provisioning Apr 30 03:28:25.267723 waagent[1795]: 2025-04-30T03:28:25.261239Z INFO Daemon Daemon Handle ovf-env.xml. 
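waagent's protocol detection above verifies a route to 168.63.129.16 and then settles on wire protocol 2012-11-30. WireServer calls must carry that version in an x-ms-version header; a hedged sketch of the goal-state fetch the daemon then performs:

    import urllib.request

    req = urllib.request.Request(
        "http://168.63.129.16/machine/?comp=goalstate",
        headers={"x-ms-version": "2012-11-30"})  # version negotiated in the log
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(resp.read().decode())  # goal-state XML: incarnation, certs, ...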
Apr 30 03:28:25.267723 waagent[1795]: 2025-04-30T03:28:25.262318Z INFO Daemon Daemon Set hostname [ci-4081.3.3-a-afe39379c7] Apr 30 03:28:25.278872 waagent[1795]: 2025-04-30T03:28:25.278786Z INFO Daemon Daemon Publish hostname [ci-4081.3.3-a-afe39379c7] Apr 30 03:28:25.286945 waagent[1795]: 2025-04-30T03:28:25.280387Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 30 03:28:25.286945 waagent[1795]: 2025-04-30T03:28:25.280831Z INFO Daemon Daemon Primary interface is [eth0] Apr 30 03:28:25.304651 systemd-networkd[1331]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:25.304661 systemd-networkd[1331]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:28:25.304713 systemd-networkd[1331]: eth0: DHCP lease lost Apr 30 03:28:25.306209 waagent[1795]: 2025-04-30T03:28:25.306080Z INFO Daemon Daemon Create user account if not exists Apr 30 03:28:25.323500 waagent[1795]: 2025-04-30T03:28:25.307953Z INFO Daemon Daemon User core already exists, skip useradd Apr 30 03:28:25.323500 waagent[1795]: 2025-04-30T03:28:25.308773Z INFO Daemon Daemon Configure sudoer Apr 30 03:28:25.323500 waagent[1795]: 2025-04-30T03:28:25.310049Z INFO Daemon Daemon Configure sshd Apr 30 03:28:25.323500 waagent[1795]: 2025-04-30T03:28:25.310851Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 30 03:28:25.323500 waagent[1795]: 2025-04-30T03:28:25.311606Z INFO Daemon Daemon Deploy ssh public key. Apr 30 03:28:25.324307 systemd-networkd[1331]: eth0: DHCPv6 lease lost Apr 30 03:28:25.351264 systemd-networkd[1331]: eth0: DHCPv4 address 10.200.8.38/24, gateway 10.200.8.1 acquired from 168.63.129.16 Apr 30 03:28:26.429316 waagent[1795]: 2025-04-30T03:28:26.429238Z INFO Daemon Daemon Provisioning complete Apr 30 03:28:26.444090 waagent[1795]: 2025-04-30T03:28:26.443996Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 30 03:28:26.451711 waagent[1795]: 2025-04-30T03:28:26.445395Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
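The lease churn above is intentional: after setting and publishing the hostname, the agent bounces the NIC so the next DHCP exchange registers the new name with Azure DNS, which is why eth0 drops its lease and then re-acquires 10.200.8.38. Conceptually (a sketch of the idea, not waagent's actual code; requires root):

    import socket
    import subprocess

    socket.sethostname("ci-4081.3.3-a-afe39379c7")  # transient hostname
    # bounce the link so the DHCP renewal advertises the new hostname
    subprocess.run(["ip", "link", "set", "eth0", "down"], check=True)
    subprocess.run(["ip", "link", "set", "eth0", "up"], check=True)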
Apr 30 03:28:26.451711 waagent[1795]: 2025-04-30T03:28:26.445861Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Apr 30 03:28:26.575636 waagent[1885]: 2025-04-30T03:28:26.575522Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Apr 30 03:28:26.576055 waagent[1885]: 2025-04-30T03:28:26.575709Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.3 Apr 30 03:28:26.576055 waagent[1885]: 2025-04-30T03:28:26.575796Z INFO ExtHandler ExtHandler Python: 3.11.9 Apr 30 03:28:26.610873 waagent[1885]: 2025-04-30T03:28:26.610762Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 30 03:28:26.611148 waagent[1885]: 2025-04-30T03:28:26.611089Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 03:28:26.611283 waagent[1885]: 2025-04-30T03:28:26.611228Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 03:28:26.620529 waagent[1885]: 2025-04-30T03:28:26.620441Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 30 03:28:26.626453 waagent[1885]: 2025-04-30T03:28:26.626391Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Apr 30 03:28:26.626989 waagent[1885]: 2025-04-30T03:28:26.626927Z INFO ExtHandler Apr 30 03:28:26.627091 waagent[1885]: 2025-04-30T03:28:26.627032Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 97115e00-612c-4d61-b3aa-eacb40b59971 eTag: 11979280009188351714 source: Fabric] Apr 30 03:28:26.627433 waagent[1885]: 2025-04-30T03:28:26.627382Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Apr 30 03:28:26.627990 waagent[1885]: 2025-04-30T03:28:26.627941Z INFO ExtHandler Apr 30 03:28:26.628080 waagent[1885]: 2025-04-30T03:28:26.628022Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 30 03:28:26.632047 waagent[1885]: 2025-04-30T03:28:26.631996Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 30 03:28:26.720390 waagent[1885]: 2025-04-30T03:28:26.720234Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FD4875753147655A21B1C59A631C1E1641DF6D70', 'hasPrivateKey': True} Apr 30 03:28:26.720816 waagent[1885]: 2025-04-30T03:28:26.720757Z INFO ExtHandler Downloaded certificate {'thumbprint': '36BD75E7034B543911E4DC08673EC45A9220C7B8', 'hasPrivateKey': False} Apr 30 03:28:26.721289 waagent[1885]: 2025-04-30T03:28:26.721237Z INFO ExtHandler Fetch goal state completed Apr 30 03:28:26.736334 waagent[1885]: 2025-04-30T03:28:26.736249Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1885 Apr 30 03:28:26.736520 waagent[1885]: 2025-04-30T03:28:26.736466Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 30 03:28:26.738145 waagent[1885]: 2025-04-30T03:28:26.738078Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.3', '', 'Flatcar Container Linux by Kinvolk'] Apr 30 03:28:26.738659 waagent[1885]: 2025-04-30T03:28:26.738597Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 30 03:28:26.805471 waagent[1885]: 2025-04-30T03:28:26.805411Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 30 03:28:26.805761 waagent[1885]: 2025-04-30T03:28:26.805700Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Apr 30 03:28:26.813991 waagent[1885]: 2025-04-30T03:28:26.813944Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 30 03:28:26.821508 systemd[1]: Reloading requested from client PID 1900 ('systemctl') (unit waagent.service)... Apr 30 03:28:26.821526 systemd[1]: Reloading... Apr 30 03:28:26.926267 zram_generator::config[1937]: No configuration found. Apr 30 03:28:27.045683 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:27.127243 systemd[1]: Reloading finished in 305 ms. Apr 30 03:28:27.157212 waagent[1885]: 2025-04-30T03:28:27.151809Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Apr 30 03:28:27.161551 systemd[1]: Reloading requested from client PID 1991 ('systemctl') (unit waagent.service)... Apr 30 03:28:27.161575 systemd[1]: Reloading... Apr 30 03:28:27.248214 zram_generator::config[2023]: No configuration found. Apr 30 03:28:27.375071 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:27.465515 systemd[1]: Reloading finished in 303 ms. Apr 30 03:28:27.493218 waagent[1885]: 2025-04-30T03:28:27.492624Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 30 03:28:27.493218 waagent[1885]: 2025-04-30T03:28:27.492846Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 30 03:28:28.686104 waagent[1885]: 2025-04-30T03:28:28.685998Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 30 03:28:28.686922 waagent[1885]: 2025-04-30T03:28:28.686848Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 30 03:28:28.687889 waagent[1885]: 2025-04-30T03:28:28.687808Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 30 03:28:28.688452 waagent[1885]: 2025-04-30T03:28:28.688384Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 30 03:28:28.688604 waagent[1885]: 2025-04-30T03:28:28.688554Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 03:28:28.688740 waagent[1885]: 2025-04-30T03:28:28.688689Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 03:28:28.688960 waagent[1885]: 2025-04-30T03:28:28.688907Z INFO EnvHandler ExtHandler Configure routes Apr 30 03:28:28.689085 waagent[1885]: 2025-04-30T03:28:28.689029Z INFO EnvHandler ExtHandler Gateway:None Apr 30 03:28:28.689264 waagent[1885]: 2025-04-30T03:28:28.689133Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 03:28:28.689581 waagent[1885]: 2025-04-30T03:28:28.689520Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 30 03:28:28.689706 waagent[1885]: 2025-04-30T03:28:28.689649Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 03:28:28.689835 waagent[1885]: 2025-04-30T03:28:28.689784Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Apr 30 03:28:28.690047 waagent[1885]: 2025-04-30T03:28:28.689996Z INFO EnvHandler ExtHandler Routes:None Apr 30 03:28:28.691017 waagent[1885]: 2025-04-30T03:28:28.690938Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 30 03:28:28.691353 waagent[1885]: 2025-04-30T03:28:28.691303Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 30 03:28:28.691440 waagent[1885]: 2025-04-30T03:28:28.691359Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Apr 30 03:28:28.692327 waagent[1885]: 2025-04-30T03:28:28.692203Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 30 03:28:28.694550 waagent[1885]: 2025-04-30T03:28:28.694507Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 30 03:28:28.694550 waagent[1885]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 30 03:28:28.694550 waagent[1885]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Apr 30 03:28:28.694550 waagent[1885]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 30 03:28:28.694550 waagent[1885]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 30 03:28:28.694550 waagent[1885]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 30 03:28:28.694550 waagent[1885]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 30 03:28:28.704216 waagent[1885]: 2025-04-30T03:28:28.702590Z INFO ExtHandler ExtHandler Apr 30 03:28:28.704216 waagent[1885]: 2025-04-30T03:28:28.702701Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 16ffb320-d5aa-4fdc-8ae5-9f3758d29778 correlation 76635110-9113-42c9-a40f-74b883d20c50 created: 2025-04-30T03:27:18.829094Z] Apr 30 03:28:28.704216 waagent[1885]: 2025-04-30T03:28:28.703219Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
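The routing table above is raw /proc/net/route, where IPv4 addresses are little-endian hex: 0108C80A is the gateway 10.200.8.1, 10813FA8 is the WireServer 168.63.129.16, and FEA9FEA9 is IMDS at 169.254.169.254. Decoding one field as a worked example:

    import socket
    import struct

    def route_hex_to_ip(field: str) -> str:
        # /proc/net/route stores IPv4 addresses as little-endian hex
        return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

    route_hex_to_ip("0108C80A")  # -> '10.200.8.1' (default gateway)
    route_hex_to_ip("FEA9FEA9")  # -> '169.254.169.254' (IMDS)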
Apr 30 03:28:28.704216 waagent[1885]: 2025-04-30T03:28:28.704018Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Apr 30 03:28:28.750382 waagent[1885]: 2025-04-30T03:28:28.750305Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A231681D-32C2-44DB-B0B8-E413B5F172B5;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Apr 30 03:28:28.760462 waagent[1885]: 2025-04-30T03:28:28.760389Z INFO MonitorHandler ExtHandler Network interfaces:
Apr 30 03:28:28.760462 waagent[1885]: Executing ['ip', '-a', '-o', 'link']:
Apr 30 03:28:28.760462 waagent[1885]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Apr 30 03:28:28.760462 waagent[1885]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e1:53:1f brd ff:ff:ff:ff:ff:ff
Apr 30 03:28:28.760462 waagent[1885]: 3: enP62842s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e1:53:1f brd ff:ff:ff:ff:ff:ff\ altname enP62842p0s2
Apr 30 03:28:28.760462 waagent[1885]: Executing ['ip', '-4', '-a', '-o', 'address']:
Apr 30 03:28:28.760462 waagent[1885]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Apr 30 03:28:28.760462 waagent[1885]: 2: eth0 inet 10.200.8.38/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Apr 30 03:28:28.760462 waagent[1885]: Executing ['ip', '-6', '-a', '-o', 'address']:
Apr 30 03:28:28.760462 waagent[1885]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Apr 30 03:28:28.760462 waagent[1885]: 2: eth0 inet6 fe80::6245:bdff:fee1:531f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Apr 30 03:28:28.760462 waagent[1885]: 3: enP62842s1 inet6 fe80::6245:bdff:fee1:531f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Apr 30 03:28:28.838298 waagent[1885]: 2025-04-30T03:28:28.838207Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules.
Apr 30 03:28:28.838298 waagent[1885]: Current Firewall rules:
Apr 30 03:28:28.838298 waagent[1885]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:28:28.838298 waagent[1885]: pkts bytes target prot opt in out source destination
Apr 30 03:28:28.838298 waagent[1885]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:28:28.838298 waagent[1885]: pkts bytes target prot opt in out source destination
Apr 30 03:28:28.838298 waagent[1885]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:28:28.838298 waagent[1885]: pkts bytes target prot opt in out source destination
Apr 30 03:28:28.838298 waagent[1885]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 30 03:28:28.838298 waagent[1885]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 30 03:28:28.838298 waagent[1885]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 30 03:28:28.841732 waagent[1885]: 2025-04-30T03:28:28.841664Z INFO EnvHandler ExtHandler Current Firewall rules:
Apr 30 03:28:28.841732 waagent[1885]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:28:28.841732 waagent[1885]: pkts bytes target prot opt in out source destination
Apr 30 03:28:28.841732 waagent[1885]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:28:28.841732 waagent[1885]: pkts bytes target prot opt in out source destination
Apr 30 03:28:28.841732 waagent[1885]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 30 03:28:28.841732 waagent[1885]: pkts bytes target prot opt in out source destination
Apr 30 03:28:28.841732 waagent[1885]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 30 03:28:28.841732 waagent[1885]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 30 03:28:28.841732 waagent[1885]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 30 03:28:28.842145 waagent[1885]: 2025-04-30T03:28:28.842004Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Apr 30 03:28:34.725255 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:28:34.730444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:28:34.828217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:28:34.837555 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:28:35.452080 kubelet[2121]: E0430 03:28:35.452001 2121 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:28:35.455817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:28:35.456033 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:28:35.727722 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 03:28:35.732485 systemd[1]: Started sshd@0-10.200.8.38:22-10.200.16.10:52340.service - OpenSSH per-connection server daemon (10.200.16.10:52340).
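
The OUTPUT rules in the dumps above are the Azure fabric rules waagent programs for the WireServer address: allow TCP to 168.63.129.16 on port 53, allow root-owned (UID 0) traffic, and DROP any other new connection. The earlier "DROP rule is not available" message is the agent probing for that last rule before installing the set. A hedged sketch of such a probe follows; the real waagent logic lives in the agent's firewall code, and the table and match set it uses may differ (the filter table is assumed here):

    #!/usr/bin/env python3
    # Probe for the WireServer DROP rule, in the spirit of the EnvHandler
    # messages above. Illustrative only; requires root and iptables.
    import subprocess

    WIRESERVER = "168.63.129.16"
    DROP_RULE = ["OUTPUT", "-d", WIRESERVER, "-p", "tcp",
                 "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"]

    def drop_rule_present() -> bool:
        # `iptables -C` exits 0 when the rule exists, non-zero when it does not.
        result = subprocess.run(["iptables", "-w", "-C", *DROP_RULE],
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0

    if __name__ == "__main__":
        print("WireServer DROP rule present:", drop_rule_present())
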
Apr 30 03:28:36.411893 sshd[2129]: Accepted publickey for core from 10.200.16.10 port 52340 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:28:36.413736 sshd[2129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:36.418492 systemd-logind[1678]: New session 3 of user core.
Apr 30 03:28:36.428427 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 03:28:36.964971 systemd[1]: Started sshd@1-10.200.8.38:22-10.200.16.10:52346.service - OpenSSH per-connection server daemon (10.200.16.10:52346).
Apr 30 03:28:37.590473 sshd[2134]: Accepted publickey for core from 10.200.16.10 port 52346 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:28:37.592257 sshd[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:37.598007 systemd-logind[1678]: New session 4 of user core.
Apr 30 03:28:37.611368 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 03:28:38.036049 sshd[2134]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:38.039064 systemd[1]: sshd@1-10.200.8.38:22-10.200.16.10:52346.service: Deactivated successfully.
Apr 30 03:28:38.041044 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 03:28:38.042609 systemd-logind[1678]: Session 4 logged out. Waiting for processes to exit.
Apr 30 03:28:38.043653 systemd-logind[1678]: Removed session 4.
Apr 30 03:28:38.148054 systemd[1]: Started sshd@2-10.200.8.38:22-10.200.16.10:52354.service - OpenSSH per-connection server daemon (10.200.16.10:52354).
Apr 30 03:28:38.771306 sshd[2141]: Accepted publickey for core from 10.200.16.10 port 52354 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:28:38.773036 sshd[2141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:38.778443 systemd-logind[1678]: New session 5 of user core.
Apr 30 03:28:38.784385 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 03:28:39.214968 sshd[2141]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:39.219680 systemd[1]: sshd@2-10.200.8.38:22-10.200.16.10:52354.service: Deactivated successfully.
Apr 30 03:28:39.221457 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 03:28:39.222108 systemd-logind[1678]: Session 5 logged out. Waiting for processes to exit.
Apr 30 03:28:39.223000 systemd-logind[1678]: Removed session 5.
Apr 30 03:28:39.328003 systemd[1]: Started sshd@3-10.200.8.38:22-10.200.16.10:44082.service - OpenSSH per-connection server daemon (10.200.16.10:44082).
Apr 30 03:28:39.949423 sshd[2148]: Accepted publickey for core from 10.200.16.10 port 44082 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:28:39.951256 sshd[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:39.955475 systemd-logind[1678]: New session 6 of user core.
Apr 30 03:28:39.962353 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 03:28:40.395411 sshd[2148]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:40.398251 systemd[1]: sshd@3-10.200.8.38:22-10.200.16.10:44082.service: Deactivated successfully.
Apr 30 03:28:40.400223 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 03:28:40.401783 systemd-logind[1678]: Session 6 logged out. Waiting for processes to exit.
Apr 30 03:28:40.402730 systemd-logind[1678]: Removed session 6.
Apr 30 03:28:40.506319 systemd[1]: Started sshd@4-10.200.8.38:22-10.200.16.10:44094.service - OpenSSH per-connection server daemon (10.200.16.10:44094).
Apr 30 03:28:41.130220 sshd[2155]: Accepted publickey for core from 10.200.16.10 port 44094 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:28:41.132023 sshd[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:41.136576 systemd-logind[1678]: New session 7 of user core.
Apr 30 03:28:41.143364 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 03:28:41.588728 sudo[2158]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 03:28:41.589122 sudo[2158]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:28:41.618641 sudo[2158]: pam_unix(sudo:session): session closed for user root
Apr 30 03:28:41.722181 sshd[2155]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:41.725490 systemd[1]: sshd@4-10.200.8.38:22-10.200.16.10:44094.service: Deactivated successfully.
Apr 30 03:28:41.727783 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 03:28:41.729244 systemd-logind[1678]: Session 7 logged out. Waiting for processes to exit.
Apr 30 03:28:41.730411 systemd-logind[1678]: Removed session 7.
Apr 30 03:28:41.836303 systemd[1]: Started sshd@5-10.200.8.38:22-10.200.16.10:44106.service - OpenSSH per-connection server daemon (10.200.16.10:44106).
Apr 30 03:28:42.460003 sshd[2163]: Accepted publickey for core from 10.200.16.10 port 44106 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:28:42.461842 sshd[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:42.466553 systemd-logind[1678]: New session 8 of user core.
Apr 30 03:28:42.477461 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 03:28:42.805120 sudo[2167]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 03:28:42.805512 sudo[2167]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:28:42.808870 sudo[2167]: pam_unix(sudo:session): session closed for user root
Apr 30 03:28:42.814025 sudo[2166]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 30 03:28:42.814412 sudo[2166]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:28:42.828542 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 30 03:28:42.830259 auditctl[2170]: No rules
Apr 30 03:28:42.830629 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 03:28:42.830845 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 30 03:28:42.833551 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:28:42.860101 augenrules[2188]: No rules
Apr 30 03:28:42.861612 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:28:42.863332 sudo[2166]: pam_unix(sudo:session): session closed for user root
Apr 30 03:28:42.964124 sshd[2163]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:42.968880 systemd[1]: sshd@5-10.200.8.38:22-10.200.16.10:44106.service: Deactivated successfully.
Apr 30 03:28:42.971296 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 03:28:42.972287 systemd-logind[1678]: Session 8 logged out. Waiting for processes to exit.
Apr 30 03:28:42.973441 systemd-logind[1678]: Removed session 8.
Apr 30 03:28:43.074347 systemd[1]: Started sshd@6-10.200.8.38:22-10.200.16.10:44114.service - OpenSSH per-connection server daemon (10.200.16.10:44114).
Apr 30 03:28:43.697772 sshd[2196]: Accepted publickey for core from 10.200.16.10 port 44114 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:28:43.699594 sshd[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:43.705152 systemd-logind[1678]: New session 9 of user core.
Apr 30 03:28:43.714412 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 03:28:44.042898 sudo[2199]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 03:28:44.043390 sudo[2199]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:28:45.398520 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 03:28:45.400117 (dockerd)[2214]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 03:28:45.475148 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 30 03:28:45.481461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:28:45.652146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:28:45.664555 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:28:45.702514 kubelet[2223]: E0430 03:28:45.702454 2223 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:28:45.704836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:28:45.705056 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:28:45.770472 chronyd[1664]: Selected source PHC0
Apr 30 03:28:47.554805 dockerd[2214]: time="2025-04-30T03:28:47.554725523Z" level=info msg="Starting up"
Apr 30 03:28:47.963552 dockerd[2214]: time="2025-04-30T03:28:47.963277175Z" level=info msg="Loading containers: start."
Apr 30 03:28:48.135298 kernel: Initializing XFRM netlink socket
Apr 30 03:28:48.280772 systemd-networkd[1331]: docker0: Link UP
Apr 30 03:28:48.302546 dockerd[2214]: time="2025-04-30T03:28:48.302495475Z" level=info msg="Loading containers: done."
Apr 30 03:28:48.377057 dockerd[2214]: time="2025-04-30T03:28:48.376981275Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 03:28:48.377328 dockerd[2214]: time="2025-04-30T03:28:48.377134075Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 30 03:28:48.377328 dockerd[2214]: time="2025-04-30T03:28:48.377281075Z" level=info msg="Daemon has completed initialization"
Apr 30 03:28:48.421802 dockerd[2214]: time="2025-04-30T03:28:48.421485575Z" level=info msg="API listen on /run/docker.sock"
Apr 30 03:28:48.422093 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 03:28:49.756413 containerd[1694]: time="2025-04-30T03:28:49.756366175Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
Apr 30 03:28:50.592644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3091402097.mount: Deactivated successfully.
Apr 30 03:28:52.385765 containerd[1694]: time="2025-04-30T03:28:52.385702275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:52.387769 containerd[1694]: time="2025-04-30T03:28:52.387703275Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960995"
Apr 30 03:28:52.391511 containerd[1694]: time="2025-04-30T03:28:52.391448675Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:52.395306 containerd[1694]: time="2025-04-30T03:28:52.395269075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:52.396715 containerd[1694]: time="2025-04-30T03:28:52.396258675Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.6398509s"
Apr 30 03:28:52.396715 containerd[1694]: time="2025-04-30T03:28:52.396305475Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
Apr 30 03:28:52.398059 containerd[1694]: time="2025-04-30T03:28:52.398027375Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
Apr 30 03:28:54.208857 containerd[1694]: time="2025-04-30T03:28:54.208797512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:54.211414 containerd[1694]: time="2025-04-30T03:28:54.211343264Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713784"
Apr 30 03:28:54.215330 containerd[1694]: time="2025-04-30T03:28:54.215270999Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:54.220652 containerd[1694]: time="2025-04-30T03:28:54.220581717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:54.221759 containerd[1694]: time="2025-04-30T03:28:54.221576877Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.823510801s"
Apr 30 03:28:54.221759 containerd[1694]: time="2025-04-30T03:28:54.221621680Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
Apr 30 03:28:54.222680 containerd[1694]: time="2025-04-30T03:28:54.222510933Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
Apr 30 03:28:55.725512 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 30 03:28:55.733415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:28:55.872534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:28:55.886553 (kubelet)[2435]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:28:56.398849 containerd[1694]: time="2025-04-30T03:28:56.398789227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:56.403168 containerd[1694]: time="2025-04-30T03:28:56.401705963Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780394"
Apr 30 03:28:56.403310 kubelet[2435]: E0430 03:28:56.403109 2435 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:28:56.404702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:28:56.404876 systemd[1]: kubelet.service: Failed with result 'exit-code'.
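
The containerd "Pulled image ... in ..." messages pair a reported size with a wall-clock duration, which makes the effective pull rate easy to sanity-check. A small sketch using the two pulls completed above (byte counts and durations copied from the log; the "size" containerd reports is the image size, so this is only a rough throughput figure):

    #!/usr/bin/env python3
    # Rough pull-rate check for the image pulls logged above.
    pulls = {
        "kube-apiserver:v1.31.8": (27957787, 2.6398509),            # bytes, seconds
        "kube-controller-manager:v1.31.8": (26202149, 1.823510801),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / 2**20:.1f} MiB/s")
    # -> roughly 10.1 and 13.7 MiB/s respectively
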
Apr 30 03:28:56.408107 containerd[1694]: time="2025-04-30T03:28:56.408035542Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:56.415619 containerd[1694]: time="2025-04-30T03:28:56.415549236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:56.416780 containerd[1694]: time="2025-04-30T03:28:56.416637850Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.194089516s"
Apr 30 03:28:56.416780 containerd[1694]: time="2025-04-30T03:28:56.416677850Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
Apr 30 03:28:56.417712 containerd[1694]: time="2025-04-30T03:28:56.417667763Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
Apr 30 03:28:57.476905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount344008520.mount: Deactivated successfully.
Apr 30 03:28:57.968532 containerd[1694]: time="2025-04-30T03:28:57.968394227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:57.970371 containerd[1694]: time="2025-04-30T03:28:57.970309951Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354633"
Apr 30 03:28:57.974002 containerd[1694]: time="2025-04-30T03:28:57.973941296Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:57.978678 containerd[1694]: time="2025-04-30T03:28:57.978614255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:57.979515 containerd[1694]: time="2025-04-30T03:28:57.979155662Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.561430498s"
Apr 30 03:28:57.979515 containerd[1694]: time="2025-04-30T03:28:57.979214862Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\""
Apr 30 03:28:57.980033 containerd[1694]: time="2025-04-30T03:28:57.979832770Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 03:28:58.493974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573123999.mount: Deactivated successfully.
Apr 30 03:28:59.544436 containerd[1694]: time="2025-04-30T03:28:59.544375807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:59.547088 containerd[1694]: time="2025-04-30T03:28:59.547020340Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Apr 30 03:28:59.550395 containerd[1694]: time="2025-04-30T03:28:59.550339582Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:59.555206 containerd[1694]: time="2025-04-30T03:28:59.555131141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:59.556279 containerd[1694]: time="2025-04-30T03:28:59.556100453Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.576233483s"
Apr 30 03:28:59.556279 containerd[1694]: time="2025-04-30T03:28:59.556144554Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Apr 30 03:28:59.556996 containerd[1694]: time="2025-04-30T03:28:59.556958864Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 30 03:29:00.046986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2630855558.mount: Deactivated successfully.
Apr 30 03:29:00.070000 containerd[1694]: time="2025-04-30T03:29:00.069944270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:00.072484 containerd[1694]: time="2025-04-30T03:29:00.072419301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Apr 30 03:29:00.077522 containerd[1694]: time="2025-04-30T03:29:00.077465264Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:00.082457 containerd[1694]: time="2025-04-30T03:29:00.082401626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:00.083115 containerd[1694]: time="2025-04-30T03:29:00.083078434Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 526.080369ms"
Apr 30 03:29:00.083226 containerd[1694]: time="2025-04-30T03:29:00.083119535Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 30 03:29:00.083933 containerd[1694]: time="2025-04-30T03:29:00.083875144Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Apr 30 03:29:00.604527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2115044093.mount: Deactivated successfully.
Apr 30 03:29:03.000915 containerd[1694]: time="2025-04-30T03:29:03.000847016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:03.002682 containerd[1694]: time="2025-04-30T03:29:03.002615661Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021"
Apr 30 03:29:03.005176 containerd[1694]: time="2025-04-30T03:29:03.005123024Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:03.011739 containerd[1694]: time="2025-04-30T03:29:03.011685489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:03.013178 containerd[1694]: time="2025-04-30T03:29:03.012838418Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.928917574s"
Apr 30 03:29:03.013178 containerd[1694]: time="2025-04-30T03:29:03.012880519Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Apr 30 03:29:05.954959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:29:05.967503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:29:06.007301 systemd[1]: Reloading requested from client PID 2581 ('systemctl') (unit session-9.scope)...
Apr 30 03:29:06.007321 systemd[1]: Reloading...
Apr 30 03:29:06.134225 zram_generator::config[2621]: No configuration found.
Apr 30 03:29:06.255948 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:29:06.336849 systemd[1]: Reloading finished in 328 ms.
Apr 30 03:29:06.392781 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:29:06.395679 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 03:29:06.395902 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:29:06.402433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:29:06.539000 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Apr 30 03:29:06.628485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:29:06.645574 (kubelet)[2693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 03:29:07.222057 update_engine[1680]: I20250430 03:29:07.221905 1680 update_attempter.cc:509] Updating boot flags...
Apr 30 03:29:07.342253 kubelet[2693]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:29:07.342253 kubelet[2693]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 03:29:07.342253 kubelet[2693]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:29:07.342253 kubelet[2693]: I0430 03:29:07.341030 2693 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 03:29:07.358209 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2711)
Apr 30 03:29:07.752451 kubelet[2693]: I0430 03:29:07.752221 2693 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Apr 30 03:29:07.752451 kubelet[2693]: I0430 03:29:07.752260 2693 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 03:29:07.752655 kubelet[2693]: I0430 03:29:07.752626 2693 server.go:929] "Client rotation is on, will bootstrap in background"
Apr 30 03:29:07.777353 kubelet[2693]: I0430 03:29:07.777002 2693 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 03:29:07.777353 kubelet[2693]: E0430 03:29:07.777306 2693 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.38:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:29:07.784258 kubelet[2693]: E0430 03:29:07.784222 2693 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 30 03:29:07.784258 kubelet[2693]: I0430 03:29:07.784252 2693 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 30 03:29:07.790354 kubelet[2693]: I0430 03:29:07.790329 2693 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 03:29:07.791584 kubelet[2693]: I0430 03:29:07.791559 2693 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Apr 30 03:29:07.791771 kubelet[2693]: I0430 03:29:07.791730 2693 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 03:29:07.791950 kubelet[2693]: I0430 03:29:07.791765 2693 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-afe39379c7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 30 03:29:07.792097 kubelet[2693]: I0430 03:29:07.791976 2693 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 03:29:07.792097 kubelet[2693]: I0430 03:29:07.791994 2693 container_manager_linux.go:300] "Creating device plugin manager"
Apr 30 03:29:07.792170 kubelet[2693]: I0430 03:29:07.792139 2693 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:29:07.794716 kubelet[2693]: I0430 03:29:07.794488 2693 kubelet.go:408] "Attempting to sync node with API server"
Apr 30 03:29:07.794716 kubelet[2693]: I0430 03:29:07.794516 2693 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 03:29:07.794716 kubelet[2693]: I0430 03:29:07.794554 2693 kubelet.go:314] "Adding apiserver pod source"
Apr 30 03:29:07.794716 kubelet[2693]: I0430 03:29:07.794571 2693 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 03:29:07.799587 kubelet[2693]: W0430 03:29:07.798347 2693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-afe39379c7&limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Apr 30 03:29:07.799587 kubelet[2693]: E0430 03:29:07.798418 2693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-afe39379c7&limit=500&resourceVersion=0\": dial tcp 10.200.8.38:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:29:07.799587 kubelet[2693]: W0430 03:29:07.799428 2693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Apr 30 03:29:07.799587 kubelet[2693]: E0430 03:29:07.799479 2693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.38:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:29:07.800237 kubelet[2693]: I0430 03:29:07.800220 2693 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 03:29:07.802308 kubelet[2693]: I0430 03:29:07.802281 2693 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 03:29:07.803320 kubelet[2693]: W0430 03:29:07.802775 2693 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 03:29:07.805677 kubelet[2693]: I0430 03:29:07.805341 2693 server.go:1269] "Started kubelet"
Apr 30 03:29:07.815207 kubelet[2693]: I0430 03:29:07.813857 2693 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 03:29:07.815207 kubelet[2693]: E0430 03:29:07.811660 2693 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-a-afe39379c7.183afaf7cf8a7c51 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-afe39379c7,UID:ci-4081.3.3-a-afe39379c7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-afe39379c7,},FirstTimestamp:2025-04-30 03:29:07.805314129 +0000 UTC m=+1.156415189,LastTimestamp:2025-04-30 03:29:07.805314129 +0000 UTC m=+1.156415189,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-afe39379c7,}"
Apr 30 03:29:07.818168 kubelet[2693]: I0430 03:29:07.818153 2693 volume_manager.go:289] "Starting Kubelet Volume Manager"
Apr 30 03:29:07.818878 kubelet[2693]: I0430 03:29:07.818843 2693 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 03:29:07.819081 kubelet[2693]: I0430 03:29:07.819066 2693 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 30 03:29:07.819258 kubelet[2693]: I0430 03:29:07.819245 2693 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 03:29:07.820119 kubelet[2693]: I0430 03:29:07.820067 2693 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 03:29:07.820561 kubelet[2693]: I0430 03:29:07.820539 2693 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 03:29:07.820848 kubelet[2693]: I0430 03:29:07.820823 2693 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 30 03:29:07.821453 kubelet[2693]: I0430 03:29:07.821430 2693 server.go:460] "Adding debug handlers to kubelet server"
Apr 30 03:29:07.822776 kubelet[2693]: E0430 03:29:07.822748 2693 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-afe39379c7\" not found"
Apr 30 03:29:07.823306 kubelet[2693]: W0430 03:29:07.823252 2693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Apr 30 03:29:07.823384 kubelet[2693]: E0430 03:29:07.823322 2693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.38:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:29:07.823432 kubelet[2693]: E0430 03:29:07.823400 2693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-afe39379c7?timeout=10s\": dial tcp 10.200.8.38:6443: connect: connection refused" interval="200ms"
Apr 30 03:29:07.824298 kubelet[2693]: I0430 03:29:07.824277 2693 factory.go:221] Registration of the systemd container factory successfully
Apr 30 03:29:07.824391 kubelet[2693]: I0430 03:29:07.824370 2693 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 03:29:07.825351 kubelet[2693]: E0430 03:29:07.825330 2693 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 03:29:07.826129 kubelet[2693]: I0430 03:29:07.826109 2693 factory.go:221] Registration of the containerd container factory successfully
Apr 30 03:29:07.837302 kubelet[2693]: I0430 03:29:07.837257 2693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 03:29:07.838309 kubelet[2693]: I0430 03:29:07.838272 2693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 03:29:07.838309 kubelet[2693]: I0430 03:29:07.838304 2693 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 30 03:29:07.838439 kubelet[2693]: I0430 03:29:07.838337 2693 kubelet.go:2321] "Starting kubelet main sync loop"
Apr 30 03:29:07.838439 kubelet[2693]: E0430 03:29:07.838384 2693 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 03:29:07.845644 kubelet[2693]: W0430 03:29:07.845578 2693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Apr 30 03:29:07.845778 kubelet[2693]: E0430 03:29:07.845661 2693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.38:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:29:07.885521 kubelet[2693]: I0430 03:29:07.885472 2693 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 30 03:29:07.885521 kubelet[2693]: I0430 03:29:07.885511 2693 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 30 03:29:07.885830 kubelet[2693]: I0430 03:29:07.885537 2693 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:29:07.889765 kubelet[2693]: I0430 03:29:07.889738 2693 policy_none.go:49] "None policy: Start"
Apr 30 03:29:07.890433 kubelet[2693]: I0430 03:29:07.890410 2693 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 30 03:29:07.890539 kubelet[2693]: I0430 03:29:07.890440 2693 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 03:29:07.900454 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 30 03:29:07.909142 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 30 03:29:07.922688 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 30 03:29:07.923134 kubelet[2693]: E0430 03:29:07.922858 2693 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-afe39379c7\" not found"
Apr 30 03:29:07.924425 kubelet[2693]: I0430 03:29:07.924401 2693 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 03:29:07.924648 kubelet[2693]: I0430 03:29:07.924630 2693 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 30 03:29:07.924728 kubelet[2693]: I0430 03:29:07.924653 2693 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 03:29:07.925829 kubelet[2693]: I0430 03:29:07.925215 2693 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 03:29:07.927224 kubelet[2693]: E0430 03:29:07.927012 2693 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-a-afe39379c7\" not found"
Apr 30 03:29:07.948641 systemd[1]: Created slice kubepods-burstable-podc0acd70f93936c93a75ccdccb05693f7.slice - libcontainer container kubepods-burstable-podc0acd70f93936c93a75ccdccb05693f7.slice.
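
The nodeConfig blob logged at 03:29:07.791765 above is JSON, so its dense HardEvictionThresholds payload can be unpacked mechanically. A small sketch, with the blob truncated to the relevant field (values copied from the log):

    #!/usr/bin/env python3
    # Pretty-print the eviction thresholds from the kubelet nodeConfig above.
    import json

    node_config = json.loads("""{"HardEvictionThresholds":[
      {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
      {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
      {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
      {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
      {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}]}""")

    for t in node_config["HardEvictionThresholds"]:
        v = t["Value"]
        limit = v["Quantity"] if v["Quantity"] else f"{v['Percentage']:.0%}"
        print(f"{t['Signal']} {t['Operator']} {limit}")
    # -> e.g. "memory.available LessThan 100Mi", "nodefs.available LessThan 10%"
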
Apr 30 03:29:07.962886 systemd[1]: Created slice kubepods-burstable-pod4d80e552dc59a9fe50d1b6e16b0de8f8.slice - libcontainer container kubepods-burstable-pod4d80e552dc59a9fe50d1b6e16b0de8f8.slice. Apr 30 03:29:07.968927 systemd[1]: Created slice kubepods-burstable-pod09a7c36e517bc163f5cae2770f155929.slice - libcontainer container kubepods-burstable-pod09a7c36e517bc163f5cae2770f155929.slice. Apr 30 03:29:08.024404 kubelet[2693]: E0430 03:29:08.024265 2693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-afe39379c7?timeout=10s\": dial tcp 10.200.8.38:6443: connect: connection refused" interval="400ms" Apr 30 03:29:08.027889 kubelet[2693]: I0430 03:29:08.027854 2693 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:08.028329 kubelet[2693]: E0430 03:29:08.028239 2693 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.38:6443/api/v1/nodes\": dial tcp 10.200.8.38:6443: connect: connection refused" node="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:08.120973 kubelet[2693]: I0430 03:29:08.120855 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d80e552dc59a9fe50d1b6e16b0de8f8-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-afe39379c7\" (UID: \"4d80e552dc59a9fe50d1b6e16b0de8f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:08.120973 kubelet[2693]: I0430 03:29:08.120917 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d80e552dc59a9fe50d1b6e16b0de8f8-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-afe39379c7\" (UID: \"4d80e552dc59a9fe50d1b6e16b0de8f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:08.120973 kubelet[2693]: I0430 03:29:08.120951 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d80e552dc59a9fe50d1b6e16b0de8f8-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-afe39379c7\" (UID: \"4d80e552dc59a9fe50d1b6e16b0de8f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:08.120973 kubelet[2693]: I0430 03:29:08.120976 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c0acd70f93936c93a75ccdccb05693f7-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-afe39379c7\" (UID: \"c0acd70f93936c93a75ccdccb05693f7\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:08.121421 kubelet[2693]: I0430 03:29:08.121001 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0acd70f93936c93a75ccdccb05693f7-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-afe39379c7\" (UID: \"c0acd70f93936c93a75ccdccb05693f7\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:08.121601 kubelet[2693]: I0430 03:29:08.121029 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4d80e552dc59a9fe50d1b6e16b0de8f8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-afe39379c7\" (UID: \"4d80e552dc59a9fe50d1b6e16b0de8f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:08.121731 kubelet[2693]: I0430 03:29:08.121639 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09a7c36e517bc163f5cae2770f155929-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-afe39379c7\" (UID: \"09a7c36e517bc163f5cae2770f155929\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:08.121731 kubelet[2693]: I0430 03:29:08.121676 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0acd70f93936c93a75ccdccb05693f7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-afe39379c7\" (UID: \"c0acd70f93936c93a75ccdccb05693f7\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:08.121731 kubelet[2693]: I0430 03:29:08.121704 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4d80e552dc59a9fe50d1b6e16b0de8f8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-afe39379c7\" (UID: \"4d80e552dc59a9fe50d1b6e16b0de8f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:08.230341 kubelet[2693]: I0430 03:29:08.230311 2693 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:08.230675 kubelet[2693]: E0430 03:29:08.230646 2693 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.38:6443/api/v1/nodes\": dial tcp 10.200.8.38:6443: connect: connection refused" node="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:08.261732 containerd[1694]: time="2025-04-30T03:29:08.261685199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-afe39379c7,Uid:c0acd70f93936c93a75ccdccb05693f7,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:08.267372 containerd[1694]: time="2025-04-30T03:29:08.267334776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-afe39379c7,Uid:4d80e552dc59a9fe50d1b6e16b0de8f8,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:08.272151 containerd[1694]: time="2025-04-30T03:29:08.272096842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-afe39379c7,Uid:09a7c36e517bc163f5cae2770f155929,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:08.425120 kubelet[2693]: E0430 03:29:08.424938 2693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-afe39379c7?timeout=10s\": dial tcp 10.200.8.38:6443: connect: connection refused" interval="800ms" Apr 30 03:29:08.632817 kubelet[2693]: I0430 03:29:08.632786 2693 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:08.633202 kubelet[2693]: E0430 03:29:08.633157 2693 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.38:6443/api/v1/nodes\": dial tcp 10.200.8.38:6443: connect: connection refused" node="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:08.689700 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4270689141.mount: Deactivated successfully. Apr 30 03:29:08.720320 containerd[1694]: time="2025-04-30T03:29:08.720263399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:08.723634 containerd[1694]: time="2025-04-30T03:29:08.723579044Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 30 03:29:08.726716 containerd[1694]: time="2025-04-30T03:29:08.726674587Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:08.730122 containerd[1694]: time="2025-04-30T03:29:08.730086034Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:08.732912 containerd[1694]: time="2025-04-30T03:29:08.732857872Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:29:08.738088 containerd[1694]: time="2025-04-30T03:29:08.738047843Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:08.739827 containerd[1694]: time="2025-04-30T03:29:08.739750166Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:29:08.744314 containerd[1694]: time="2025-04-30T03:29:08.744259528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:08.745520 containerd[1694]: time="2025-04-30T03:29:08.744971038Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 472.541292ms" Apr 30 03:29:08.746481 containerd[1694]: time="2025-04-30T03:29:08.746449659Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 484.680958ms" Apr 30 03:29:08.749480 containerd[1694]: time="2025-04-30T03:29:08.749449500Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 482.015723ms" Apr 30 03:29:08.772240 kubelet[2693]: W0430 03:29:08.772154 2693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.200.8.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Apr 30 03:29:08.772375 kubelet[2693]: E0430 03:29:08.772252 2693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.38:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:29:08.838122 kubelet[2693]: W0430 03:29:08.838033 2693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Apr 30 03:29:08.838122 kubelet[2693]: E0430 03:29:08.838090 2693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.38:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:29:08.946208 kubelet[2693]: W0430 03:29:08.946041 2693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Apr 30 03:29:08.946208 kubelet[2693]: E0430 03:29:08.946099 2693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.38:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:29:09.187484 kubelet[2693]: W0430 03:29:09.187413 2693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-afe39379c7&limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Apr 30 03:29:09.187627 kubelet[2693]: E0430 03:29:09.187497 2693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-afe39379c7&limit=500&resourceVersion=0\": dial tcp 10.200.8.38:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:29:09.226472 kubelet[2693]: E0430 03:29:09.226418 2693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-afe39379c7?timeout=10s\": dial tcp 10.200.8.38:6443: connect: connection refused" interval="1.6s" Apr 30 03:29:09.385869 containerd[1694]: time="2025-04-30T03:29:09.385347236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:09.385869 containerd[1694]: time="2025-04-30T03:29:09.385548839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:09.385869 containerd[1694]: time="2025-04-30T03:29:09.385608740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:09.386565 containerd[1694]: time="2025-04-30T03:29:09.386513452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:09.393402 containerd[1694]: time="2025-04-30T03:29:09.391517821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:09.393402 containerd[1694]: time="2025-04-30T03:29:09.391569322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:09.393402 containerd[1694]: time="2025-04-30T03:29:09.391604722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:09.393402 containerd[1694]: time="2025-04-30T03:29:09.391700723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:09.405098 containerd[1694]: time="2025-04-30T03:29:09.404281696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:09.406879 containerd[1694]: time="2025-04-30T03:29:09.406544927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:09.406879 containerd[1694]: time="2025-04-30T03:29:09.406615528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:09.409326 containerd[1694]: time="2025-04-30T03:29:09.408298351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:09.437383 systemd[1]: Started cri-containerd-83a2a1b3ea754f73cf8e1cbad8e5f6e381fd5dba099c4ebe751fa51531b82072.scope - libcontainer container 83a2a1b3ea754f73cf8e1cbad8e5f6e381fd5dba099c4ebe751fa51531b82072. Apr 30 03:29:09.443939 kubelet[2693]: I0430 03:29:09.443375 2693 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:09.443939 kubelet[2693]: E0430 03:29:09.443743 2693 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.38:6443/api/v1/nodes\": dial tcp 10.200.8.38:6443: connect: connection refused" node="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:09.447846 systemd[1]: Started cri-containerd-7d8bccb9227879a134dbcb4d1cce0f2b698ba4f2f074e7745c36738415bc3461.scope - libcontainer container 7d8bccb9227879a134dbcb4d1cce0f2b698ba4f2f074e7745c36738415bc3461. Apr 30 03:29:09.450163 systemd[1]: Started cri-containerd-949fda4e59d76b43816b1b73427e006e7781280d9d8afd6e409ab99d6c0ece3d.scope - libcontainer container 949fda4e59d76b43816b1b73427e006e7781280d9d8afd6e409ab99d6c0ece3d. 
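The lease-controller retries above double their interval from 800ms to 1.6s while the API server at 10.200.8.38:6443 refuses connections. A minimal sketch of that doubling-backoff reachability probe, using only the Go standard library; the address comes from the log, but the attempt count and dial timeout are illustrative assumptions, not the kubelet's actual values:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "10.200.8.38:6443" // API server endpoint seen in the log
	backoff := 800 * time.Millisecond

	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver reachable")
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // 800ms -> 1.6s -> 3.2s ..., mirroring the logged intervals
	}
	fmt.Println("giving up")
}
```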
Apr 30 03:29:09.519888 containerd[1694]: time="2025-04-30T03:29:09.519759483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-afe39379c7,Uid:09a7c36e517bc163f5cae2770f155929,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d8bccb9227879a134dbcb4d1cce0f2b698ba4f2f074e7745c36738415bc3461\"" Apr 30 03:29:09.526005 containerd[1694]: time="2025-04-30T03:29:09.525848366Z" level=info msg="CreateContainer within sandbox \"7d8bccb9227879a134dbcb4d1cce0f2b698ba4f2f074e7745c36738415bc3461\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:29:09.537589 containerd[1694]: time="2025-04-30T03:29:09.537334224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-afe39379c7,Uid:c0acd70f93936c93a75ccdccb05693f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"949fda4e59d76b43816b1b73427e006e7781280d9d8afd6e409ab99d6c0ece3d\"" Apr 30 03:29:09.544028 containerd[1694]: time="2025-04-30T03:29:09.543855814Z" level=info msg="CreateContainer within sandbox \"949fda4e59d76b43816b1b73427e006e7781280d9d8afd6e409ab99d6c0ece3d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:29:09.545157 containerd[1694]: time="2025-04-30T03:29:09.544508223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-afe39379c7,Uid:4d80e552dc59a9fe50d1b6e16b0de8f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"83a2a1b3ea754f73cf8e1cbad8e5f6e381fd5dba099c4ebe751fa51531b82072\"" Apr 30 03:29:09.548424 containerd[1694]: time="2025-04-30T03:29:09.548400676Z" level=info msg="CreateContainer within sandbox \"83a2a1b3ea754f73cf8e1cbad8e5f6e381fd5dba099c4ebe751fa51531b82072\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:29:09.597342 containerd[1694]: time="2025-04-30T03:29:09.597288248Z" level=info msg="CreateContainer within sandbox \"949fda4e59d76b43816b1b73427e006e7781280d9d8afd6e409ab99d6c0ece3d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1591ff145edd29ae6899690a6af337c120135e51da6d50a625e16eba767eea2f\"" Apr 30 03:29:09.600645 containerd[1694]: time="2025-04-30T03:29:09.600601993Z" level=info msg="CreateContainer within sandbox \"7d8bccb9227879a134dbcb4d1cce0f2b698ba4f2f074e7745c36738415bc3461\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e30c4c5d4e034b399ee7d9bd89ec877d4fb32a5e03495b3d6daef78e9f75418f\"" Apr 30 03:29:09.600979 containerd[1694]: time="2025-04-30T03:29:09.600875597Z" level=info msg="StartContainer for \"1591ff145edd29ae6899690a6af337c120135e51da6d50a625e16eba767eea2f\"" Apr 30 03:29:09.604827 containerd[1694]: time="2025-04-30T03:29:09.604653949Z" level=info msg="StartContainer for \"e30c4c5d4e034b399ee7d9bd89ec877d4fb32a5e03495b3d6daef78e9f75418f\"" Apr 30 03:29:09.608629 containerd[1694]: time="2025-04-30T03:29:09.608525502Z" level=info msg="CreateContainer within sandbox \"83a2a1b3ea754f73cf8e1cbad8e5f6e381fd5dba099c4ebe751fa51531b82072\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ca9c8e5fdb998ac421d56b9a698b0884397cf398b79cd678620c0238dbab4213\"" Apr 30 03:29:09.610226 containerd[1694]: time="2025-04-30T03:29:09.609200811Z" level=info msg="StartContainer for \"ca9c8e5fdb998ac421d56b9a698b0884397cf398b79cd678620c0238dbab4213\"" Apr 30 03:29:09.642765 systemd[1]: Started cri-containerd-1591ff145edd29ae6899690a6af337c120135e51da6d50a625e16eba767eea2f.scope - libcontainer container 
1591ff145edd29ae6899690a6af337c120135e51da6d50a625e16eba767eea2f. Apr 30 03:29:09.654493 systemd[1]: Started cri-containerd-e30c4c5d4e034b399ee7d9bd89ec877d4fb32a5e03495b3d6daef78e9f75418f.scope - libcontainer container e30c4c5d4e034b399ee7d9bd89ec877d4fb32a5e03495b3d6daef78e9f75418f. Apr 30 03:29:09.664763 systemd[1]: Started cri-containerd-ca9c8e5fdb998ac421d56b9a698b0884397cf398b79cd678620c0238dbab4213.scope - libcontainer container ca9c8e5fdb998ac421d56b9a698b0884397cf398b79cd678620c0238dbab4213. Apr 30 03:29:09.752608 containerd[1694]: time="2025-04-30T03:29:09.752560981Z" level=info msg="StartContainer for \"1591ff145edd29ae6899690a6af337c120135e51da6d50a625e16eba767eea2f\" returns successfully" Apr 30 03:29:09.776978 containerd[1694]: time="2025-04-30T03:29:09.772877560Z" level=info msg="StartContainer for \"e30c4c5d4e034b399ee7d9bd89ec877d4fb32a5e03495b3d6daef78e9f75418f\" returns successfully" Apr 30 03:29:09.784050 containerd[1694]: time="2025-04-30T03:29:09.784007513Z" level=info msg="StartContainer for \"ca9c8e5fdb998ac421d56b9a698b0884397cf398b79cd678620c0238dbab4213\" returns successfully" Apr 30 03:29:11.047246 kubelet[2693]: I0430 03:29:11.046458 2693 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:11.648086 kubelet[2693]: E0430 03:29:11.648023 2693 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-a-afe39379c7\" not found" node="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:11.675884 kubelet[2693]: I0430 03:29:11.675840 2693 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:11.675884 kubelet[2693]: E0430 03:29:11.675887 2693 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.3-a-afe39379c7\": node \"ci-4081.3.3-a-afe39379c7\" not found" Apr 30 03:29:11.709610 kubelet[2693]: E0430 03:29:11.709563 2693 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-afe39379c7\" not found" Apr 30 03:29:11.801367 kubelet[2693]: I0430 03:29:11.801317 2693 apiserver.go:52] "Watching apiserver" Apr 30 03:29:11.819790 kubelet[2693]: I0430 03:29:11.819659 2693 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 03:29:11.820259 kubelet[2693]: E0430 03:29:11.820071 2693 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.3-a-afe39379c7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:13.744456 systemd[1]: Reloading requested from client PID 3006 ('systemctl') (unit session-9.scope)... Apr 30 03:29:13.744472 systemd[1]: Reloading... Apr 30 03:29:13.845217 zram_generator::config[3045]: No configuration found. Apr 30 03:29:13.975460 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:29:14.072335 systemd[1]: Reloading finished in 327 ms. Apr 30 03:29:14.114819 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:14.134690 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:29:14.134954 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:29:14.141515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:14.238743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:14.251583 (kubelet)[3113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:29:14.298041 kubelet[3113]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:29:14.298041 kubelet[3113]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:29:14.298041 kubelet[3113]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:29:14.298533 kubelet[3113]: I0430 03:29:14.298104 3113 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:29:14.309014 kubelet[3113]: I0430 03:29:14.308979 3113 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 03:29:14.309014 kubelet[3113]: I0430 03:29:14.309004 3113 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:29:14.309379 kubelet[3113]: I0430 03:29:14.309281 3113 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 03:29:14.311147 kubelet[3113]: I0430 03:29:14.311118 3113 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 03:29:14.313092 kubelet[3113]: I0430 03:29:14.312946 3113 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:29:14.316232 kubelet[3113]: E0430 03:29:14.316176 3113 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 03:29:14.316232 kubelet[3113]: I0430 03:29:14.316223 3113 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 03:29:14.319178 kubelet[3113]: I0430 03:29:14.319144 3113 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:29:14.319365 kubelet[3113]: I0430 03:29:14.319271 3113 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 03:29:14.319455 kubelet[3113]: I0430 03:29:14.319422 3113 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:29:14.319627 kubelet[3113]: I0430 03:29:14.319451 3113 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-afe39379c7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 03:29:14.319750 kubelet[3113]: I0430 03:29:14.319629 3113 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:29:14.319750 kubelet[3113]: I0430 03:29:14.319642 3113 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 03:29:14.319750 kubelet[3113]: I0430 03:29:14.319678 3113 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:14.319871 kubelet[3113]: I0430 03:29:14.319792 3113 kubelet.go:408] "Attempting to sync node with API server" Apr 30 03:29:14.319871 kubelet[3113]: I0430 03:29:14.319806 3113 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:29:14.319871 kubelet[3113]: I0430 03:29:14.319839 3113 kubelet.go:314] "Adding apiserver pod source" Apr 30 03:29:14.319871 kubelet[3113]: I0430 03:29:14.319856 3113 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:29:14.322160 kubelet[3113]: I0430 03:29:14.322139 3113 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:29:14.324434 kubelet[3113]: I0430 03:29:14.322876 3113 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:29:14.324434 kubelet[3113]: I0430 03:29:14.324078 3113 server.go:1269] "Started kubelet" Apr 30 03:29:14.338111 kubelet[3113]: I0430 03:29:14.338048 3113 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:29:14.342209 
kubelet[3113]: I0430 03:29:14.342135 3113 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:29:14.344206 kubelet[3113]: I0430 03:29:14.343714 3113 server.go:460] "Adding debug handlers to kubelet server" Apr 30 03:29:14.345145 kubelet[3113]: I0430 03:29:14.345097 3113 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:29:14.345726 kubelet[3113]: I0430 03:29:14.345710 3113 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 03:29:14.347746 kubelet[3113]: I0430 03:29:14.347725 3113 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:29:14.348004 kubelet[3113]: I0430 03:29:14.347981 3113 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 03:29:14.351388 kubelet[3113]: I0430 03:29:14.351369 3113 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 03:29:14.351676 kubelet[3113]: I0430 03:29:14.351659 3113 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:29:14.353959 kubelet[3113]: I0430 03:29:14.353255 3113 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:29:14.353959 kubelet[3113]: I0430 03:29:14.353362 3113 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:29:14.358092 kubelet[3113]: I0430 03:29:14.358048 3113 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:29:14.359394 kubelet[3113]: I0430 03:29:14.359372 3113 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:29:14.359473 kubelet[3113]: I0430 03:29:14.359408 3113 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:29:14.359473 kubelet[3113]: I0430 03:29:14.359426 3113 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 03:29:14.359560 kubelet[3113]: E0430 03:29:14.359469 3113 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:29:14.362420 kubelet[3113]: I0430 03:29:14.362401 3113 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:29:14.414401 kubelet[3113]: I0430 03:29:14.414374 3113 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:29:14.414556 kubelet[3113]: I0430 03:29:14.414423 3113 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:29:14.414556 kubelet[3113]: I0430 03:29:14.414446 3113 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:14.414679 kubelet[3113]: I0430 03:29:14.414658 3113 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:29:14.414739 kubelet[3113]: I0430 03:29:14.414676 3113 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:29:14.414739 kubelet[3113]: I0430 03:29:14.414701 3113 policy_none.go:49] "None policy: Start" Apr 30 03:29:14.415406 kubelet[3113]: I0430 03:29:14.415381 3113 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:29:14.415406 kubelet[3113]: I0430 03:29:14.415405 3113 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:29:14.415617 kubelet[3113]: I0430 03:29:14.415598 3113 state_mem.go:75] "Updated machine memory state" Apr 30 03:29:14.419535 kubelet[3113]: I0430 03:29:14.419505 3113 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:29:14.420031 kubelet[3113]: I0430 03:29:14.419679 3113 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 03:29:14.420031 kubelet[3113]: I0430 03:29:14.419696 3113 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:29:14.420031 kubelet[3113]: I0430 03:29:14.419926 3113 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:29:14.471507 kubelet[3113]: W0430 03:29:14.471454 3113 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:14.475480 kubelet[3113]: W0430 03:29:14.475350 3113 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:14.475480 kubelet[3113]: W0430 03:29:14.475391 3113 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:14.529746 kubelet[3113]: I0430 03:29:14.529579 3113 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:14.544775 kubelet[3113]: I0430 03:29:14.544727 3113 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:14.544954 kubelet[3113]: I0430 03:29:14.544833 3113 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:14.553426 kubelet[3113]: I0430 03:29:14.553393 3113 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09a7c36e517bc163f5cae2770f155929-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-afe39379c7\" (UID: \"09a7c36e517bc163f5cae2770f155929\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:14.553426 kubelet[3113]: I0430 03:29:14.553430 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c0acd70f93936c93a75ccdccb05693f7-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-afe39379c7\" (UID: \"c0acd70f93936c93a75ccdccb05693f7\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:14.553984 kubelet[3113]: I0430 03:29:14.553456 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4d80e552dc59a9fe50d1b6e16b0de8f8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-afe39379c7\" (UID: \"4d80e552dc59a9fe50d1b6e16b0de8f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:14.553984 kubelet[3113]: I0430 03:29:14.553479 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d80e552dc59a9fe50d1b6e16b0de8f8-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-afe39379c7\" (UID: \"4d80e552dc59a9fe50d1b6e16b0de8f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:14.553984 kubelet[3113]: I0430 03:29:14.553566 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d80e552dc59a9fe50d1b6e16b0de8f8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-afe39379c7\" (UID: \"4d80e552dc59a9fe50d1b6e16b0de8f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:14.553984 kubelet[3113]: I0430 03:29:14.553598 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0acd70f93936c93a75ccdccb05693f7-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-afe39379c7\" (UID: \"c0acd70f93936c93a75ccdccb05693f7\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:14.553984 kubelet[3113]: I0430 03:29:14.553637 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0acd70f93936c93a75ccdccb05693f7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-afe39379c7\" (UID: \"c0acd70f93936c93a75ccdccb05693f7\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:14.554454 kubelet[3113]: I0430 03:29:14.553660 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d80e552dc59a9fe50d1b6e16b0de8f8-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-afe39379c7\" (UID: \"4d80e552dc59a9fe50d1b6e16b0de8f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:14.554454 kubelet[3113]: I0430 03:29:14.553682 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/4d80e552dc59a9fe50d1b6e16b0de8f8-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-afe39379c7\" (UID: \"4d80e552dc59a9fe50d1b6e16b0de8f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:15.320654 kubelet[3113]: I0430 03:29:15.320557 3113 apiserver.go:52] "Watching apiserver" Apr 30 03:29:15.351750 kubelet[3113]: I0430 03:29:15.351671 3113 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 03:29:15.403838 kubelet[3113]: W0430 03:29:15.403792 3113 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:15.404097 kubelet[3113]: E0430 03:29:15.403938 3113 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-a-afe39379c7\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-a-afe39379c7" Apr 30 03:29:15.423141 kubelet[3113]: I0430 03:29:15.423054 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-a-afe39379c7" podStartSLOduration=1.4230187 podStartE2EDuration="1.4230187s" podCreationTimestamp="2025-04-30 03:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:15.423002 +0000 UTC m=+1.168270255" watchObservedRunningTime="2025-04-30 03:29:15.4230187 +0000 UTC m=+1.168287055" Apr 30 03:29:15.435214 kubelet[3113]: I0430 03:29:15.435137 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-afe39379c7" podStartSLOduration=1.435114483 podStartE2EDuration="1.435114483s" podCreationTimestamp="2025-04-30 03:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:15.435056382 +0000 UTC m=+1.180324737" watchObservedRunningTime="2025-04-30 03:29:15.435114483 +0000 UTC m=+1.180382738" Apr 30 03:29:15.469332 kubelet[3113]: I0430 03:29:15.469119 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-a-afe39379c7" podStartSLOduration=1.469095597 podStartE2EDuration="1.469095597s" podCreationTimestamp="2025-04-30 03:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:15.454153071 +0000 UTC m=+1.199421426" watchObservedRunningTime="2025-04-30 03:29:15.469095597 +0000 UTC m=+1.214363852" Apr 30 03:29:20.373451 kubelet[3113]: I0430 03:29:20.373417 3113 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:29:20.375565 containerd[1694]: time="2025-04-30T03:29:20.375521684Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 03:29:20.376007 kubelet[3113]: I0430 03:29:20.375741 3113 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:29:20.581584 sudo[2199]: pam_unix(sudo:session): session closed for user root Apr 30 03:29:20.682014 sshd[2196]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:20.685834 systemd[1]: sshd@6-10.200.8.38:22-10.200.16.10:44114.service: Deactivated successfully. 
Apr 30 03:29:20.689627 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:29:20.689851 systemd[1]: session-9.scope: Consumed 4.438s CPU time, 156.1M memory peak, 0B memory swap peak. Apr 30 03:29:20.691414 systemd-logind[1678]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:29:20.692368 systemd-logind[1678]: Removed session 9. Apr 30 03:29:21.249438 systemd[1]: Created slice kubepods-besteffort-podc65c0ab9_ec87_4e3a_9086_4ea2bc6a726c.slice - libcontainer container kubepods-besteffort-podc65c0ab9_ec87_4e3a_9086_4ea2bc6a726c.slice. Apr 30 03:29:21.396476 kubelet[3113]: I0430 03:29:21.396310 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c65c0ab9-ec87-4e3a-9086-4ea2bc6a726c-kube-proxy\") pod \"kube-proxy-bkfpm\" (UID: \"c65c0ab9-ec87-4e3a-9086-4ea2bc6a726c\") " pod="kube-system/kube-proxy-bkfpm" Apr 30 03:29:21.396476 kubelet[3113]: I0430 03:29:21.396375 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq2jb\" (UniqueName: \"kubernetes.io/projected/c65c0ab9-ec87-4e3a-9086-4ea2bc6a726c-kube-api-access-zq2jb\") pod \"kube-proxy-bkfpm\" (UID: \"c65c0ab9-ec87-4e3a-9086-4ea2bc6a726c\") " pod="kube-system/kube-proxy-bkfpm" Apr 30 03:29:21.396476 kubelet[3113]: I0430 03:29:21.396406 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c65c0ab9-ec87-4e3a-9086-4ea2bc6a726c-lib-modules\") pod \"kube-proxy-bkfpm\" (UID: \"c65c0ab9-ec87-4e3a-9086-4ea2bc6a726c\") " pod="kube-system/kube-proxy-bkfpm" Apr 30 03:29:21.397419 kubelet[3113]: I0430 03:29:21.396428 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c65c0ab9-ec87-4e3a-9086-4ea2bc6a726c-xtables-lock\") pod \"kube-proxy-bkfpm\" (UID: \"c65c0ab9-ec87-4e3a-9086-4ea2bc6a726c\") " pod="kube-system/kube-proxy-bkfpm" Apr 30 03:29:21.424136 systemd[1]: Created slice kubepods-besteffort-pod749c00b5_6ca3_4f90_9202_f44e3066e333.slice - libcontainer container kubepods-besteffort-pod749c00b5_6ca3_4f90_9202_f44e3066e333.slice. Apr 30 03:29:21.559124 containerd[1694]: time="2025-04-30T03:29:21.558955959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bkfpm,Uid:c65c0ab9-ec87-4e3a-9086-4ea2bc6a726c,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:21.597615 containerd[1694]: time="2025-04-30T03:29:21.597403972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:21.598257 kubelet[3113]: I0430 03:29:21.598139 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/749c00b5-6ca3-4f90-9202-f44e3066e333-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-st6n9\" (UID: \"749c00b5-6ca3-4f90-9202-f44e3066e333\") " pod="tigera-operator/tigera-operator-6f6897fdc5-st6n9" Apr 30 03:29:21.598257 kubelet[3113]: I0430 03:29:21.598217 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnmkm\" (UniqueName: \"kubernetes.io/projected/749c00b5-6ca3-4f90-9202-f44e3066e333-kube-api-access-vnmkm\") pod \"tigera-operator-6f6897fdc5-st6n9\" (UID: \"749c00b5-6ca3-4f90-9202-f44e3066e333\") " pod="tigera-operator/tigera-operator-6f6897fdc5-st6n9" Apr 30 03:29:21.598411 containerd[1694]: time="2025-04-30T03:29:21.598372088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:21.598722 containerd[1694]: time="2025-04-30T03:29:21.598527790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:21.598880 containerd[1694]: time="2025-04-30T03:29:21.598745694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:21.622373 systemd[1]: run-containerd-runc-k8s.io-fe8207eb5c12ee1e73ec1e1a4918df4845579d624ba2fa951852c16b80faa0c3-runc.2K7me5.mount: Deactivated successfully. Apr 30 03:29:21.630376 systemd[1]: Started cri-containerd-fe8207eb5c12ee1e73ec1e1a4918df4845579d624ba2fa951852c16b80faa0c3.scope - libcontainer container fe8207eb5c12ee1e73ec1e1a4918df4845579d624ba2fa951852c16b80faa0c3. Apr 30 03:29:21.653783 containerd[1694]: time="2025-04-30T03:29:21.653731771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bkfpm,Uid:c65c0ab9-ec87-4e3a-9086-4ea2bc6a726c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe8207eb5c12ee1e73ec1e1a4918df4845579d624ba2fa951852c16b80faa0c3\"" Apr 30 03:29:21.657932 containerd[1694]: time="2025-04-30T03:29:21.657802836Z" level=info msg="CreateContainer within sandbox \"fe8207eb5c12ee1e73ec1e1a4918df4845579d624ba2fa951852c16b80faa0c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:29:21.698255 containerd[1694]: time="2025-04-30T03:29:21.698039077Z" level=info msg="CreateContainer within sandbox \"fe8207eb5c12ee1e73ec1e1a4918df4845579d624ba2fa951852c16b80faa0c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c20895a341865742a59f667bf98fd0c9d3f7929736fd7eea9509b7e00e770306\"" Apr 30 03:29:21.700285 containerd[1694]: time="2025-04-30T03:29:21.699936608Z" level=info msg="StartContainer for \"c20895a341865742a59f667bf98fd0c9d3f7929736fd7eea9509b7e00e770306\"" Apr 30 03:29:21.727331 containerd[1694]: time="2025-04-30T03:29:21.727265543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-st6n9,Uid:749c00b5-6ca3-4f90-9202-f44e3066e333,Namespace:tigera-operator,Attempt:0,}" Apr 30 03:29:21.731381 systemd[1]: Started cri-containerd-c20895a341865742a59f667bf98fd0c9d3f7929736fd7eea9509b7e00e770306.scope - libcontainer container c20895a341865742a59f667bf98fd0c9d3f7929736fd7eea9509b7e00e770306. 
Apr 30 03:29:21.762827 containerd[1694]: time="2025-04-30T03:29:21.762664908Z" level=info msg="StartContainer for \"c20895a341865742a59f667bf98fd0c9d3f7929736fd7eea9509b7e00e770306\" returns successfully" Apr 30 03:29:21.782472 containerd[1694]: time="2025-04-30T03:29:21.782278521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:21.782882 containerd[1694]: time="2025-04-30T03:29:21.782627126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:21.783049 containerd[1694]: time="2025-04-30T03:29:21.782863330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:21.785645 containerd[1694]: time="2025-04-30T03:29:21.783888747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:21.809526 systemd[1]: Started cri-containerd-c8cb7e8ebddf0826ed8159b3813c0e06da4cc60e3726987e2704fb47c357ce32.scope - libcontainer container c8cb7e8ebddf0826ed8159b3813c0e06da4cc60e3726987e2704fb47c357ce32. Apr 30 03:29:21.866938 containerd[1694]: time="2025-04-30T03:29:21.866110958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-st6n9,Uid:749c00b5-6ca3-4f90-9202-f44e3066e333,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c8cb7e8ebddf0826ed8159b3813c0e06da4cc60e3726987e2704fb47c357ce32\"" Apr 30 03:29:21.874359 containerd[1694]: time="2025-04-30T03:29:21.874086085Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 03:29:22.421198 kubelet[3113]: I0430 03:29:22.420560 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bkfpm" podStartSLOduration=1.4205366 podStartE2EDuration="1.4205366s" podCreationTimestamp="2025-04-30 03:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:22.420254896 +0000 UTC m=+8.165523251" watchObservedRunningTime="2025-04-30 03:29:22.4205366 +0000 UTC m=+8.165804955" Apr 30 03:29:23.429462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount339481744.mount: Deactivated successfully. 
Apr 30 03:29:23.956086 containerd[1694]: time="2025-04-30T03:29:23.956030390Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:23.958805 containerd[1694]: time="2025-04-30T03:29:23.958616731Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 03:29:23.961139 containerd[1694]: time="2025-04-30T03:29:23.960928768Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:23.964219 containerd[1694]: time="2025-04-30T03:29:23.964170420Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:23.964864 containerd[1694]: time="2025-04-30T03:29:23.964827430Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.090695344s" Apr 30 03:29:23.964938 containerd[1694]: time="2025-04-30T03:29:23.964869431Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 03:29:23.967798 containerd[1694]: time="2025-04-30T03:29:23.967771777Z" level=info msg="CreateContainer within sandbox \"c8cb7e8ebddf0826ed8159b3813c0e06da4cc60e3726987e2704fb47c357ce32\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 03:29:23.999961 containerd[1694]: time="2025-04-30T03:29:23.999913190Z" level=info msg="CreateContainer within sandbox \"c8cb7e8ebddf0826ed8159b3813c0e06da4cc60e3726987e2704fb47c357ce32\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"16f11f896bee8906484fd618b1f7cd6159e6aacb9dc1319ee9622bc247078ec8\"" Apr 30 03:29:24.000597 containerd[1694]: time="2025-04-30T03:29:24.000547100Z" level=info msg="StartContainer for \"16f11f896bee8906484fd618b1f7cd6159e6aacb9dc1319ee9622bc247078ec8\"" Apr 30 03:29:24.040376 systemd[1]: Started cri-containerd-16f11f896bee8906484fd618b1f7cd6159e6aacb9dc1319ee9622bc247078ec8.scope - libcontainer container 16f11f896bee8906484fd618b1f7cd6159e6aacb9dc1319ee9622bc247078ec8. 
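The pull completing above ("quay.io/tigera/operator:v1.36.7" ... in 2.090695344s) is driven through containerd's CRI plugin. A hedged sketch of the same pull issued directly against the containerd Go client; the socket path is containerd's conventional default and the "k8s.io" namespace is where CRI-managed images live, both assumptions rather than values taken from this log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; the log does not name the path.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images are kept in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.36.7", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}
```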
Apr 30 03:29:24.068441 containerd[1694]: time="2025-04-30T03:29:24.068389982Z" level=info msg="StartContainer for \"16f11f896bee8906484fd618b1f7cd6159e6aacb9dc1319ee9622bc247078ec8\" returns successfully" Apr 30 03:29:27.226317 kubelet[3113]: I0430 03:29:27.226241 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-st6n9" podStartSLOduration=4.129412901 podStartE2EDuration="6.226180742s" podCreationTimestamp="2025-04-30 03:29:21 +0000 UTC" firstStartedPulling="2025-04-30 03:29:21.869126906 +0000 UTC m=+7.614395161" lastFinishedPulling="2025-04-30 03:29:23.965894747 +0000 UTC m=+9.711163002" observedRunningTime="2025-04-30 03:29:24.426163388 +0000 UTC m=+10.171431643" watchObservedRunningTime="2025-04-30 03:29:27.226180742 +0000 UTC m=+12.971449097" Apr 30 03:29:27.235973 kubelet[3113]: I0430 03:29:27.235940 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b68d4\" (UniqueName: \"kubernetes.io/projected/0d1c928f-7e28-4f5a-a5a4-a04d4b90f9ad-kube-api-access-b68d4\") pod \"calico-typha-599578f945-lfhcw\" (UID: \"0d1c928f-7e28-4f5a-a5a4-a04d4b90f9ad\") " pod="calico-system/calico-typha-599578f945-lfhcw" Apr 30 03:29:27.236111 kubelet[3113]: I0430 03:29:27.236041 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d1c928f-7e28-4f5a-a5a4-a04d4b90f9ad-tigera-ca-bundle\") pod \"calico-typha-599578f945-lfhcw\" (UID: \"0d1c928f-7e28-4f5a-a5a4-a04d4b90f9ad\") " pod="calico-system/calico-typha-599578f945-lfhcw" Apr 30 03:29:27.236111 kubelet[3113]: I0430 03:29:27.236067 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0d1c928f-7e28-4f5a-a5a4-a04d4b90f9ad-typha-certs\") pod \"calico-typha-599578f945-lfhcw\" (UID: \"0d1c928f-7e28-4f5a-a5a4-a04d4b90f9ad\") " pod="calico-system/calico-typha-599578f945-lfhcw" Apr 30 03:29:27.240731 systemd[1]: Created slice kubepods-besteffort-pod0d1c928f_7e28_4f5a_a5a4_a04d4b90f9ad.slice - libcontainer container kubepods-besteffort-pod0d1c928f_7e28_4f5a_a5a4_a04d4b90f9ad.slice. Apr 30 03:29:27.279552 systemd[1]: Created slice kubepods-besteffort-pod13d38367_102b_4fbe_8250_b9849599ce07.slice - libcontainer container kubepods-besteffort-pod13d38367_102b_4fbe_8250_b9849599ce07.slice. 
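The pod_startup_latency_tracker entry above for tigera-operator reports podStartE2EDuration=6.226180742s but podStartSLOduration=4.129412901s; the two figures differ by exactly the image pull time. A short check of that arithmetic with the timestamps from the log (the monotonic "m=+..." suffixes are dropped before parsing):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-04-30 03:29:21 +0000 UTC")             // podCreationTimestamp
	pullStart := parse("2025-04-30 03:29:21.869126906 +0000 UTC") // firstStartedPulling
	pullEnd := parse("2025-04-30 03:29:23.965894747 +0000 UTC")   // lastFinishedPulling
	watched := parse("2025-04-30 03:29:27.226180742 +0000 UTC")   // watchObservedRunningTime

	e2e := watched.Sub(created)         // 6.226180742s end to end
	slo := e2e - pullEnd.Sub(pullStart) // minus 2.096767841s of pulling -> 4.129412901s
	fmt.Println("podStartE2EDuration:", e2e, " podStartSLOduration:", slo)
}
```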
Apr 30 03:29:27.336624 kubelet[3113]: I0430 03:29:27.336554 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13d38367-102b-4fbe-8250-b9849599ce07-lib-modules\") pod \"calico-node-p4h8j\" (UID: \"13d38367-102b-4fbe-8250-b9849599ce07\") " pod="calico-system/calico-node-p4h8j" Apr 30 03:29:27.339363 kubelet[3113]: I0430 03:29:27.336640 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/13d38367-102b-4fbe-8250-b9849599ce07-node-certs\") pod \"calico-node-p4h8j\" (UID: \"13d38367-102b-4fbe-8250-b9849599ce07\") " pod="calico-system/calico-node-p4h8j" Apr 30 03:29:27.339363 kubelet[3113]: I0430 03:29:27.336662 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/13d38367-102b-4fbe-8250-b9849599ce07-cni-bin-dir\") pod \"calico-node-p4h8j\" (UID: \"13d38367-102b-4fbe-8250-b9849599ce07\") " pod="calico-system/calico-node-p4h8j" Apr 30 03:29:27.339363 kubelet[3113]: I0430 03:29:27.336755 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13d38367-102b-4fbe-8250-b9849599ce07-tigera-ca-bundle\") pod \"calico-node-p4h8j\" (UID: \"13d38367-102b-4fbe-8250-b9849599ce07\") " pod="calico-system/calico-node-p4h8j" Apr 30 03:29:27.339363 kubelet[3113]: I0430 03:29:27.336779 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/13d38367-102b-4fbe-8250-b9849599ce07-cni-net-dir\") pod \"calico-node-p4h8j\" (UID: \"13d38367-102b-4fbe-8250-b9849599ce07\") " pod="calico-system/calico-node-p4h8j" Apr 30 03:29:27.339363 kubelet[3113]: I0430 03:29:27.336799 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/13d38367-102b-4fbe-8250-b9849599ce07-policysync\") pod \"calico-node-p4h8j\" (UID: \"13d38367-102b-4fbe-8250-b9849599ce07\") " pod="calico-system/calico-node-p4h8j" Apr 30 03:29:27.339617 kubelet[3113]: I0430 03:29:27.336818 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/13d38367-102b-4fbe-8250-b9849599ce07-cni-log-dir\") pod \"calico-node-p4h8j\" (UID: \"13d38367-102b-4fbe-8250-b9849599ce07\") " pod="calico-system/calico-node-p4h8j" Apr 30 03:29:27.339617 kubelet[3113]: I0430 03:29:27.336858 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw9v4\" (UniqueName: \"kubernetes.io/projected/13d38367-102b-4fbe-8250-b9849599ce07-kube-api-access-hw9v4\") pod \"calico-node-p4h8j\" (UID: \"13d38367-102b-4fbe-8250-b9849599ce07\") " pod="calico-system/calico-node-p4h8j" Apr 30 03:29:27.339617 kubelet[3113]: I0430 03:29:27.336886 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/13d38367-102b-4fbe-8250-b9849599ce07-flexvol-driver-host\") pod \"calico-node-p4h8j\" (UID: \"13d38367-102b-4fbe-8250-b9849599ce07\") " pod="calico-system/calico-node-p4h8j" Apr 30 03:29:27.339617 kubelet[3113]: I0430 03:29:27.336909 3113 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13d38367-102b-4fbe-8250-b9849599ce07-xtables-lock\") pod \"calico-node-p4h8j\" (UID: \"13d38367-102b-4fbe-8250-b9849599ce07\") " pod="calico-system/calico-node-p4h8j" Apr 30 03:29:27.339617 kubelet[3113]: I0430 03:29:27.336929 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/13d38367-102b-4fbe-8250-b9849599ce07-var-run-calico\") pod \"calico-node-p4h8j\" (UID: \"13d38367-102b-4fbe-8250-b9849599ce07\") " pod="calico-system/calico-node-p4h8j" Apr 30 03:29:27.339799 kubelet[3113]: I0430 03:29:27.336951 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13d38367-102b-4fbe-8250-b9849599ce07-var-lib-calico\") pod \"calico-node-p4h8j\" (UID: \"13d38367-102b-4fbe-8250-b9849599ce07\") " pod="calico-system/calico-node-p4h8j" Apr 30 03:29:27.405776 kubelet[3113]: E0430 03:29:27.405723 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f8qxp" podUID="7ca18dcc-f415-45c0-be2d-91e3486ac03d" Apr 30 03:29:27.438033 kubelet[3113]: E0430 03:29:27.437840 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.438033 kubelet[3113]: W0430 03:29:27.437869 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.438033 kubelet[3113]: E0430 03:29:27.437906 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.438497 kubelet[3113]: E0430 03:29:27.438381 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.438497 kubelet[3113]: W0430 03:29:27.438396 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.438801 kubelet[3113]: E0430 03:29:27.438624 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.439107 kubelet[3113]: E0430 03:29:27.439000 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.439107 kubelet[3113]: W0430 03:29:27.439016 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.439107 kubelet[3113]: E0430 03:29:27.439054 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.439709 kubelet[3113]: E0430 03:29:27.439481 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.439709 kubelet[3113]: W0430 03:29:27.439496 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.439709 kubelet[3113]: E0430 03:29:27.439632 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.440759 kubelet[3113]: E0430 03:29:27.440496 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.440759 kubelet[3113]: W0430 03:29:27.440508 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.440759 kubelet[3113]: E0430 03:29:27.440576 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.440759 kubelet[3113]: I0430 03:29:27.440602 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7ca18dcc-f415-45c0-be2d-91e3486ac03d-varrun\") pod \"csi-node-driver-f8qxp\" (UID: \"7ca18dcc-f415-45c0-be2d-91e3486ac03d\") " pod="calico-system/csi-node-driver-f8qxp" Apr 30 03:29:27.442356 kubelet[3113]: E0430 03:29:27.442156 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.442356 kubelet[3113]: W0430 03:29:27.442174 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.443279 kubelet[3113]: E0430 03:29:27.442742 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.443279 kubelet[3113]: W0430 03:29:27.442757 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.443760 kubelet[3113]: E0430 03:29:27.443445 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.443760 kubelet[3113]: W0430 03:29:27.443461 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.443760 kubelet[3113]: E0430 03:29:27.443486 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.445395 kubelet[3113]: E0430 03:29:27.444173 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.445395 kubelet[3113]: W0430 03:29:27.444267 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.445395 kubelet[3113]: E0430 03:29:27.444286 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.445771 kubelet[3113]: E0430 03:29:27.445618 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.445771 kubelet[3113]: E0430 03:29:27.445687 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.446141 kubelet[3113]: E0430 03:29:27.446030 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.446141 kubelet[3113]: W0430 03:29:27.446045 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.446141 kubelet[3113]: E0430 03:29:27.446060 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.446528 kubelet[3113]: E0430 03:29:27.446413 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.446528 kubelet[3113]: W0430 03:29:27.446426 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.446528 kubelet[3113]: E0430 03:29:27.446439 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.446844 kubelet[3113]: E0430 03:29:27.446724 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.446844 kubelet[3113]: W0430 03:29:27.446734 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.446844 kubelet[3113]: E0430 03:29:27.446749 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.447070 kubelet[3113]: E0430 03:29:27.447030 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.447070 kubelet[3113]: W0430 03:29:27.447043 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.447379 kubelet[3113]: E0430 03:29:27.447055 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.447731 kubelet[3113]: E0430 03:29:27.447485 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.447731 kubelet[3113]: W0430 03:29:27.447499 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.447731 kubelet[3113]: E0430 03:29:27.447513 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.447731 kubelet[3113]: I0430 03:29:27.447540 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjxk2\" (UniqueName: \"kubernetes.io/projected/7ca18dcc-f415-45c0-be2d-91e3486ac03d-kube-api-access-wjxk2\") pod \"csi-node-driver-f8qxp\" (UID: \"7ca18dcc-f415-45c0-be2d-91e3486ac03d\") " pod="calico-system/csi-node-driver-f8qxp" Apr 30 03:29:27.448138 kubelet[3113]: E0430 03:29:27.447947 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.448138 kubelet[3113]: W0430 03:29:27.447961 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.448138 kubelet[3113]: E0430 03:29:27.447974 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.450393 kubelet[3113]: E0430 03:29:27.450253 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.450393 kubelet[3113]: W0430 03:29:27.450269 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.450393 kubelet[3113]: E0430 03:29:27.450282 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.451158 kubelet[3113]: E0430 03:29:27.451016 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.451158 kubelet[3113]: W0430 03:29:27.451038 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.451158 kubelet[3113]: E0430 03:29:27.451052 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.451543 kubelet[3113]: E0430 03:29:27.451391 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.451543 kubelet[3113]: W0430 03:29:27.451405 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.451543 kubelet[3113]: E0430 03:29:27.451419 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.451543 kubelet[3113]: I0430 03:29:27.451443 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ca18dcc-f415-45c0-be2d-91e3486ac03d-kubelet-dir\") pod \"csi-node-driver-f8qxp\" (UID: \"7ca18dcc-f415-45c0-be2d-91e3486ac03d\") " pod="calico-system/csi-node-driver-f8qxp" Apr 30 03:29:27.451961 kubelet[3113]: E0430 03:29:27.451801 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.451961 kubelet[3113]: W0430 03:29:27.451825 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.451961 kubelet[3113]: E0430 03:29:27.451839 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.451961 kubelet[3113]: I0430 03:29:27.451862 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7ca18dcc-f415-45c0-be2d-91e3486ac03d-registration-dir\") pod \"csi-node-driver-f8qxp\" (UID: \"7ca18dcc-f415-45c0-be2d-91e3486ac03d\") " pod="calico-system/csi-node-driver-f8qxp" Apr 30 03:29:27.452349 kubelet[3113]: E0430 03:29:27.452251 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.452349 kubelet[3113]: W0430 03:29:27.452267 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.452349 kubelet[3113]: E0430 03:29:27.452293 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.452963 kubelet[3113]: E0430 03:29:27.452792 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.452963 kubelet[3113]: W0430 03:29:27.452808 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.452963 kubelet[3113]: E0430 03:29:27.452909 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.453579 kubelet[3113]: E0430 03:29:27.453395 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.453579 kubelet[3113]: W0430 03:29:27.453411 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.453579 kubelet[3113]: E0430 03:29:27.453502 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.453891 kubelet[3113]: E0430 03:29:27.453794 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.453891 kubelet[3113]: W0430 03:29:27.453809 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.454100 kubelet[3113]: E0430 03:29:27.454003 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.454234 kubelet[3113]: E0430 03:29:27.454222 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.454729 kubelet[3113]: W0430 03:29:27.454568 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.456342 kubelet[3113]: E0430 03:29:27.456218 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.456589 kubelet[3113]: E0430 03:29:27.456491 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.456589 kubelet[3113]: W0430 03:29:27.456504 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.456764 kubelet[3113]: E0430 03:29:27.456682 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.456941 kubelet[3113]: E0430 03:29:27.456854 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.456941 kubelet[3113]: W0430 03:29:27.456865 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.457131 kubelet[3113]: E0430 03:29:27.457049 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.457301 kubelet[3113]: E0430 03:29:27.457290 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.457397 kubelet[3113]: W0430 03:29:27.457371 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.457578 kubelet[3113]: E0430 03:29:27.457564 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.457746 kubelet[3113]: E0430 03:29:27.457649 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.457944 kubelet[3113]: W0430 03:29:27.457801 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.458036 kubelet[3113]: E0430 03:29:27.458021 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.458240 kubelet[3113]: E0430 03:29:27.458140 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.458240 kubelet[3113]: W0430 03:29:27.458151 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.458427 kubelet[3113]: E0430 03:29:27.458364 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.458652 kubelet[3113]: E0430 03:29:27.458639 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.458930 kubelet[3113]: W0430 03:29:27.458813 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.458930 kubelet[3113]: E0430 03:29:27.458884 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.459415 kubelet[3113]: E0430 03:29:27.459336 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.459415 kubelet[3113]: W0430 03:29:27.459351 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.459558 kubelet[3113]: E0430 03:29:27.459442 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.461267 kubelet[3113]: E0430 03:29:27.459777 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.461267 kubelet[3113]: W0430 03:29:27.459792 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.461515 kubelet[3113]: E0430 03:29:27.461446 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.462360 kubelet[3113]: E0430 03:29:27.462343 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.462360 kubelet[3113]: W0430 03:29:27.462358 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.462560 kubelet[3113]: E0430 03:29:27.462419 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.462560 kubelet[3113]: I0430 03:29:27.462456 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7ca18dcc-f415-45c0-be2d-91e3486ac03d-socket-dir\") pod \"csi-node-driver-f8qxp\" (UID: \"7ca18dcc-f415-45c0-be2d-91e3486ac03d\") " pod="calico-system/csi-node-driver-f8qxp" Apr 30 03:29:27.462560 kubelet[3113]: E0430 03:29:27.462549 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.462560 kubelet[3113]: W0430 03:29:27.462558 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.462764 kubelet[3113]: E0430 03:29:27.462728 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.462764 kubelet[3113]: W0430 03:29:27.462737 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.463087 kubelet[3113]: E0430 03:29:27.462883 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.463087 kubelet[3113]: E0430 03:29:27.462894 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.463087 kubelet[3113]: E0430 03:29:27.462901 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.463087 kubelet[3113]: W0430 03:29:27.462909 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.463087 kubelet[3113]: E0430 03:29:27.462990 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.463350 kubelet[3113]: E0430 03:29:27.463239 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.463350 kubelet[3113]: W0430 03:29:27.463250 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.464239 kubelet[3113]: E0430 03:29:27.464215 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.464427 kubelet[3113]: E0430 03:29:27.464410 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.464427 kubelet[3113]: W0430 03:29:27.464426 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.464514 kubelet[3113]: E0430 03:29:27.464444 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.464709 kubelet[3113]: E0430 03:29:27.464693 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.464709 kubelet[3113]: W0430 03:29:27.464709 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.464815 kubelet[3113]: E0430 03:29:27.464800 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.467210 kubelet[3113]: E0430 03:29:27.464929 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.467210 kubelet[3113]: W0430 03:29:27.464941 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.467210 kubelet[3113]: E0430 03:29:27.465016 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.467210 kubelet[3113]: E0430 03:29:27.465132 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.467210 kubelet[3113]: W0430 03:29:27.465139 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.467210 kubelet[3113]: E0430 03:29:27.465205 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.467210 kubelet[3113]: E0430 03:29:27.465354 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.467210 kubelet[3113]: W0430 03:29:27.465362 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.467210 kubelet[3113]: E0430 03:29:27.465431 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.467210 kubelet[3113]: E0430 03:29:27.465546 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.467668 kubelet[3113]: W0430 03:29:27.465553 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.467668 kubelet[3113]: E0430 03:29:27.467213 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.467668 kubelet[3113]: E0430 03:29:27.467367 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.467668 kubelet[3113]: W0430 03:29:27.467376 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.467668 kubelet[3113]: E0430 03:29:27.467456 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.467668 kubelet[3113]: E0430 03:29:27.467598 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.467668 kubelet[3113]: W0430 03:29:27.467607 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.467979 kubelet[3113]: E0430 03:29:27.467684 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.467979 kubelet[3113]: E0430 03:29:27.467814 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.467979 kubelet[3113]: W0430 03:29:27.467823 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.467979 kubelet[3113]: E0430 03:29:27.467913 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.468156 kubelet[3113]: E0430 03:29:27.468062 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.468156 kubelet[3113]: W0430 03:29:27.468071 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.468156 kubelet[3113]: E0430 03:29:27.468149 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.468336 kubelet[3113]: E0430 03:29:27.468319 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.468336 kubelet[3113]: W0430 03:29:27.468335 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.468434 kubelet[3113]: E0430 03:29:27.468419 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.468561 kubelet[3113]: E0430 03:29:27.468545 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.468561 kubelet[3113]: W0430 03:29:27.468559 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.468683 kubelet[3113]: E0430 03:29:27.468667 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.468829 kubelet[3113]: E0430 03:29:27.468813 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.468829 kubelet[3113]: W0430 03:29:27.468829 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.468926 kubelet[3113]: E0430 03:29:27.468907 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.469066 kubelet[3113]: E0430 03:29:27.469052 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.469126 kubelet[3113]: W0430 03:29:27.469067 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.469177 kubelet[3113]: E0430 03:29:27.469146 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.469328 kubelet[3113]: E0430 03:29:27.469312 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.469328 kubelet[3113]: W0430 03:29:27.469326 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.470332 kubelet[3113]: E0430 03:29:27.470310 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.470501 kubelet[3113]: E0430 03:29:27.470484 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.470501 kubelet[3113]: W0430 03:29:27.470500 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.470606 kubelet[3113]: E0430 03:29:27.470584 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.470756 kubelet[3113]: E0430 03:29:27.470740 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.470756 kubelet[3113]: W0430 03:29:27.470755 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.470854 kubelet[3113]: E0430 03:29:27.470832 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.470983 kubelet[3113]: E0430 03:29:27.470968 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.470983 kubelet[3113]: W0430 03:29:27.470982 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.471088 kubelet[3113]: E0430 03:29:27.471058 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.471228 kubelet[3113]: E0430 03:29:27.471213 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.471228 kubelet[3113]: W0430 03:29:27.471227 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.471328 kubelet[3113]: E0430 03:29:27.471304 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.472204 kubelet[3113]: E0430 03:29:27.471491 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.472204 kubelet[3113]: W0430 03:29:27.471502 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.472204 kubelet[3113]: E0430 03:29:27.471639 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.472204 kubelet[3113]: E0430 03:29:27.471693 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.472204 kubelet[3113]: W0430 03:29:27.471699 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.472204 kubelet[3113]: E0430 03:29:27.471840 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.473372 kubelet[3113]: E0430 03:29:27.473349 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.473372 kubelet[3113]: W0430 03:29:27.473368 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.473599 kubelet[3113]: E0430 03:29:27.473480 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.473663 kubelet[3113]: E0430 03:29:27.473630 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.473663 kubelet[3113]: W0430 03:29:27.473640 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.473751 kubelet[3113]: E0430 03:29:27.473710 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.474695 kubelet[3113]: E0430 03:29:27.473871 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.474695 kubelet[3113]: W0430 03:29:27.473884 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.474695 kubelet[3113]: E0430 03:29:27.473966 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.475329 kubelet[3113]: E0430 03:29:27.475308 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.475329 kubelet[3113]: W0430 03:29:27.475328 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.475721 kubelet[3113]: E0430 03:29:27.475697 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.475721 kubelet[3113]: W0430 03:29:27.475715 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.476420 kubelet[3113]: E0430 03:29:27.476334 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.476420 kubelet[3113]: W0430 03:29:27.476352 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.482909 kubelet[3113]: E0430 03:29:27.482881 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.482996 kubelet[3113]: E0430 03:29:27.482913 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.482996 kubelet[3113]: E0430 03:29:27.482935 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.483998 kubelet[3113]: E0430 03:29:27.483863 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.483998 kubelet[3113]: W0430 03:29:27.483879 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.485437 kubelet[3113]: E0430 03:29:27.485340 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.485437 kubelet[3113]: W0430 03:29:27.485356 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.485765 kubelet[3113]: E0430 03:29:27.485679 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.485765 kubelet[3113]: W0430 03:29:27.485692 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.485999 kubelet[3113]: E0430 03:29:27.485986 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.486136 kubelet[3113]: W0430 03:29:27.486063 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.486365 kubelet[3113]: E0430 03:29:27.486353 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.488215 kubelet[3113]: W0430 03:29:27.486441 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.488215 kubelet[3113]: E0430 03:29:27.486460 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.488543 kubelet[3113]: E0430 03:29:27.488525 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.488638 kubelet[3113]: E0430 03:29:27.488625 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.488710 kubelet[3113]: E0430 03:29:27.488700 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.488969 kubelet[3113]: E0430 03:29:27.488955 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.489052 kubelet[3113]: W0430 03:29:27.489041 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.491252 kubelet[3113]: E0430 03:29:27.491231 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.491696 kubelet[3113]: E0430 03:29:27.491683 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.491789 kubelet[3113]: W0430 03:29:27.491778 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.491860 kubelet[3113]: E0430 03:29:27.491850 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.494466 kubelet[3113]: E0430 03:29:27.494443 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.494776 kubelet[3113]: E0430 03:29:27.494760 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.494776 kubelet[3113]: W0430 03:29:27.494776 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.494890 kubelet[3113]: E0430 03:29:27.494790 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.518407 kubelet[3113]: E0430 03:29:27.518377 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.518407 kubelet[3113]: W0430 03:29:27.518403 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.518608 kubelet[3113]: E0430 03:29:27.518429 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.548115 containerd[1694]: time="2025-04-30T03:29:27.547646594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599578f945-lfhcw,Uid:0d1c928f-7e28-4f5a-a5a4-a04d4b90f9ad,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:27.578004 kubelet[3113]: E0430 03:29:27.577453 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.578004 kubelet[3113]: W0430 03:29:27.577483 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.578004 kubelet[3113]: E0430 03:29:27.577524 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.578004 kubelet[3113]: E0430 03:29:27.577851 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.578004 kubelet[3113]: W0430 03:29:27.577867 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.578004 kubelet[3113]: E0430 03:29:27.577885 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.581404 kubelet[3113]: E0430 03:29:27.580573 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.581404 kubelet[3113]: W0430 03:29:27.580590 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.581404 kubelet[3113]: E0430 03:29:27.580803 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.582358 kubelet[3113]: E0430 03:29:27.582066 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.582358 kubelet[3113]: W0430 03:29:27.582081 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.582358 kubelet[3113]: E0430 03:29:27.582136 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.583876 kubelet[3113]: E0430 03:29:27.583594 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.583876 kubelet[3113]: W0430 03:29:27.583608 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.583876 kubelet[3113]: E0430 03:29:27.583647 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.585569 kubelet[3113]: E0430 03:29:27.584363 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.585569 kubelet[3113]: W0430 03:29:27.584377 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.585569 kubelet[3113]: E0430 03:29:27.585366 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.585569 kubelet[3113]: E0430 03:29:27.585436 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.585569 kubelet[3113]: W0430 03:29:27.585445 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.585569 kubelet[3113]: E0430 03:29:27.585482 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.586619 kubelet[3113]: E0430 03:29:27.586603 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.586743 kubelet[3113]: W0430 03:29:27.586730 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.586957 kubelet[3113]: E0430 03:29:27.586935 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.589965 kubelet[3113]: E0430 03:29:27.589515 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.589965 kubelet[3113]: W0430 03:29:27.589530 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.589965 kubelet[3113]: E0430 03:29:27.589632 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.589965 kubelet[3113]: E0430 03:29:27.589793 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.589965 kubelet[3113]: W0430 03:29:27.589803 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.591318 kubelet[3113]: E0430 03:29:27.590242 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.591318 kubelet[3113]: E0430 03:29:27.591268 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.591318 kubelet[3113]: W0430 03:29:27.591282 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.591474 containerd[1694]: time="2025-04-30T03:29:27.590503901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p4h8j,Uid:13d38367-102b-4fbe-8250-b9849599ce07,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:27.592095 kubelet[3113]: E0430 03:29:27.592076 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.592328 kubelet[3113]: E0430 03:29:27.592269 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.592429 kubelet[3113]: W0430 03:29:27.592415 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.593712 kubelet[3113]: E0430 03:29:27.593544 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.594340 kubelet[3113]: E0430 03:29:27.594319 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.594422 kubelet[3113]: W0430 03:29:27.594356 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.595207 kubelet[3113]: E0430 03:29:27.594486 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.595207 kubelet[3113]: E0430 03:29:27.594679 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.595207 kubelet[3113]: W0430 03:29:27.594691 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.595207 kubelet[3113]: E0430 03:29:27.594762 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.595207 kubelet[3113]: E0430 03:29:27.595083 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.595207 kubelet[3113]: W0430 03:29:27.595092 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.595207 kubelet[3113]: E0430 03:29:27.595151 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.595550 kubelet[3113]: E0430 03:29:27.595407 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.595550 kubelet[3113]: W0430 03:29:27.595418 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.595550 kubelet[3113]: E0430 03:29:27.595527 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.596483 kubelet[3113]: E0430 03:29:27.595699 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.596483 kubelet[3113]: W0430 03:29:27.595713 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.596483 kubelet[3113]: E0430 03:29:27.595729 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.596483 kubelet[3113]: E0430 03:29:27.596099 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.596483 kubelet[3113]: W0430 03:29:27.596111 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.596483 kubelet[3113]: E0430 03:29:27.596226 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.596483 kubelet[3113]: E0430 03:29:27.596482 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.599992 kubelet[3113]: W0430 03:29:27.596494 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.599992 kubelet[3113]: E0430 03:29:27.596591 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.599992 kubelet[3113]: E0430 03:29:27.597356 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.599992 kubelet[3113]: W0430 03:29:27.597369 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.599992 kubelet[3113]: E0430 03:29:27.597697 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.599992 kubelet[3113]: E0430 03:29:27.598097 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.599992 kubelet[3113]: W0430 03:29:27.598109 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.599992 kubelet[3113]: E0430 03:29:27.598428 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.599992 kubelet[3113]: E0430 03:29:27.598856 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.599992 kubelet[3113]: W0430 03:29:27.599132 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.600411 kubelet[3113]: E0430 03:29:27.599257 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.601373 kubelet[3113]: E0430 03:29:27.601356 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.601581 kubelet[3113]: W0430 03:29:27.601462 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.601692 kubelet[3113]: E0430 03:29:27.601660 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:27.602928 kubelet[3113]: E0430 03:29:27.602766 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.602928 kubelet[3113]: W0430 03:29:27.602783 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.602928 kubelet[3113]: E0430 03:29:27.602798 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.603167 kubelet[3113]: E0430 03:29:27.603155 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.603509 kubelet[3113]: W0430 03:29:27.603250 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.603509 kubelet[3113]: E0430 03:29:27.603268 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.623694 containerd[1694]: time="2025-04-30T03:29:27.620533626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:27.623694 containerd[1694]: time="2025-04-30T03:29:27.620591627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:27.623694 containerd[1694]: time="2025-04-30T03:29:27.620612927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:27.623694 containerd[1694]: time="2025-04-30T03:29:27.620710628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:27.626148 kubelet[3113]: E0430 03:29:27.626124 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:27.626380 kubelet[3113]: W0430 03:29:27.626358 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:27.626498 kubelet[3113]: E0430 03:29:27.626483 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:27.660535 systemd[1]: Started cri-containerd-e799b152739c693e95c613ebbccf5fc5dea5670842a8a205d8b3ab02ba5d2f76.scope - libcontainer container e799b152739c693e95c613ebbccf5fc5dea5670842a8a205d8b3ab02ba5d2f76. Apr 30 03:29:27.682026 containerd[1694]: time="2025-04-30T03:29:27.677681135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:27.682026 containerd[1694]: time="2025-04-30T03:29:27.680691778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
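The driver-call.go and plugins.go messages repeating above all describe one condition: the kubelet probes the FlexVolume plugin directory, tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, finds no executable and therefore gets empty output, and then fails to JSON-decode that empty string. A minimal Go sketch of both halves of the exchange, assuming the conventional FlexVolume init reply shape (this is illustrative, not the kubelet's actual code):

package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus approximates the JSON a FlexVolume driver prints in reply to
// "init"; the struct name here is an assumption for this sketch.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The uds binary is missing, so the driver "call" yields empty output,
	// and decoding "" reproduces the exact error in the log.
	var st DriverStatus
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println(err) // unexpected end of JSON input
	}

	// What a healthy driver would print for init once installed:
	reply, _ := json.Marshal(DriverStatus{
		Status:       "Success",
		Capabilities: map[string]bool{"attach": false},
	})
	fmt.Println(string(reply)) // {"status":"Success","capabilities":{"attach":false}}
}

The flexvol-driver init container started below (from the pod2daemon-flexvol image) is what installs that binary on the host, which is consistent with these errors no longer appearing after 03:29:29.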
Apr 30 03:29:27.682026 containerd[1694]: time="2025-04-30T03:29:27.677681135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:27.682026 containerd[1694]: time="2025-04-30T03:29:27.680691778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:27.682026 containerd[1694]: time="2025-04-30T03:29:27.680711078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:27.682026 containerd[1694]: time="2025-04-30T03:29:27.680795379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:27.709718 systemd[1]: Started cri-containerd-70fb514047dfc5ac67f51f580640da848f2de762f718155b4b5b5d6976d51182.scope - libcontainer container 70fb514047dfc5ac67f51f580640da848f2de762f718155b4b5b5d6976d51182. Apr 30 03:29:27.760000 containerd[1694]: time="2025-04-30T03:29:27.759707197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p4h8j,Uid:13d38367-102b-4fbe-8250-b9849599ce07,Namespace:calico-system,Attempt:0,} returns sandbox id \"70fb514047dfc5ac67f51f580640da848f2de762f718155b4b5b5d6976d51182\"" Apr 30 03:29:27.769576 containerd[1694]: time="2025-04-30T03:29:27.768849426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 03:29:27.773529 containerd[1694]: time="2025-04-30T03:29:27.773494092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599578f945-lfhcw,Uid:0d1c928f-7e28-4f5a-a5a4-a04d4b90f9ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"e799b152739c693e95c613ebbccf5fc5dea5670842a8a205d8b3ab02ba5d2f76\"" Apr 30 03:29:28.040893 kubelet[3113]: E0430 03:29:28.040742 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:28.040893 kubelet[3113]: W0430 03:29:28.040767 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:28.040893 kubelet[3113]: E0430 03:29:28.040790 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:29.149513 containerd[1694]: time="2025-04-30T03:29:29.149456976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:29.153967 containerd[1694]: time="2025-04-30T03:29:29.153892138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:29:29.159430 containerd[1694]: time="2025-04-30T03:29:29.158937510Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:29.163565 containerd[1694]: time="2025-04-30T03:29:29.163329372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:29.164506 containerd[1694]: time="2025-04-30T03:29:29.164016282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.395121255s" Apr 30 03:29:29.164506 containerd[1694]: time="2025-04-30T03:29:29.164058482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:29:29.166023 containerd[1694]: time="2025-04-30T03:29:29.165997810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 03:29:29.167106 containerd[1694]: time="2025-04-30T03:29:29.167078725Z" level=info msg="CreateContainer within sandbox \"70fb514047dfc5ac67f51f580640da848f2de762f718155b4b5b5d6976d51182\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:29:29.205075
containerd[1694]: time="2025-04-30T03:29:29.204938661Z" level=info msg="CreateContainer within sandbox \"70fb514047dfc5ac67f51f580640da848f2de762f718155b4b5b5d6976d51182\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d2f0762c8617978ecb5657937fc5d9e3bd3023932c0bd774c7b730860bee0eee\"" Apr 30 03:29:29.207205 containerd[1694]: time="2025-04-30T03:29:29.205751773Z" level=info msg="StartContainer for \"d2f0762c8617978ecb5657937fc5d9e3bd3023932c0bd774c7b730860bee0eee\"" Apr 30 03:29:29.247915 systemd[1]: run-containerd-runc-k8s.io-d2f0762c8617978ecb5657937fc5d9e3bd3023932c0bd774c7b730860bee0eee-runc.NLSs24.mount: Deactivated successfully. Apr 30 03:29:29.254496 systemd[1]: Started cri-containerd-d2f0762c8617978ecb5657937fc5d9e3bd3023932c0bd774c7b730860bee0eee.scope - libcontainer container d2f0762c8617978ecb5657937fc5d9e3bd3023932c0bd774c7b730860bee0eee. Apr 30 03:29:29.285434 containerd[1694]: time="2025-04-30T03:29:29.285383100Z" level=info msg="StartContainer for \"d2f0762c8617978ecb5657937fc5d9e3bd3023932c0bd774c7b730860bee0eee\" returns successfully" Apr 30 03:29:29.298623 systemd[1]: cri-containerd-d2f0762c8617978ecb5657937fc5d9e3bd3023932c0bd774c7b730860bee0eee.scope: Deactivated successfully. Apr 30 03:29:29.346939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2f0762c8617978ecb5657937fc5d9e3bd3023932c0bd774c7b730860bee0eee-rootfs.mount: Deactivated successfully. Apr 30 03:29:29.360621 kubelet[3113]: E0430 03:29:29.360568 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f8qxp" podUID="7ca18dcc-f415-45c0-be2d-91e3486ac03d" Apr 30 03:29:30.035671 containerd[1694]: time="2025-04-30T03:29:30.035592023Z" level=info msg="shim disconnected" id=d2f0762c8617978ecb5657937fc5d9e3bd3023932c0bd774c7b730860bee0eee namespace=k8s.io Apr 30 03:29:30.035671 containerd[1694]: time="2025-04-30T03:29:30.035662924Z" level=warning msg="cleaning up after shim disconnected" id=d2f0762c8617978ecb5657937fc5d9e3bd3023932c0bd774c7b730860bee0eee namespace=k8s.io Apr 30 03:29:30.035671 containerd[1694]: time="2025-04-30T03:29:30.035674724Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:31.361456 kubelet[3113]: E0430 03:29:31.360384 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f8qxp" podUID="7ca18dcc-f415-45c0-be2d-91e3486ac03d" Apr 30 03:29:32.155786 containerd[1694]: time="2025-04-30T03:29:32.155636043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:32.157870 containerd[1694]: time="2025-04-30T03:29:32.157669972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 03:29:32.162331 containerd[1694]: time="2025-04-30T03:29:32.162271637Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:32.165786 containerd[1694]: time="2025-04-30T03:29:32.165737086Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:32.166806 containerd[1694]: time="2025-04-30T03:29:32.166351395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.000321585s" Apr 30 03:29:32.166806 containerd[1694]: time="2025-04-30T03:29:32.166388595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:29:32.168234 containerd[1694]: time="2025-04-30T03:29:32.168145320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:29:32.183417 containerd[1694]: time="2025-04-30T03:29:32.183384336Z" level=info msg="CreateContainer within sandbox \"e799b152739c693e95c613ebbccf5fc5dea5670842a8a205d8b3ab02ba5d2f76\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:29:32.227294 containerd[1694]: time="2025-04-30T03:29:32.227252957Z" level=info msg="CreateContainer within sandbox \"e799b152739c693e95c613ebbccf5fc5dea5670842a8a205d8b3ab02ba5d2f76\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4d28d8eb9ced0f3898c0ab630bfa045f43b20312ba2295a85cb1d29958a820c7\"" Apr 30 03:29:32.227979 containerd[1694]: time="2025-04-30T03:29:32.227950867Z" level=info msg="StartContainer for \"4d28d8eb9ced0f3898c0ab630bfa045f43b20312ba2295a85cb1d29958a820c7\"" Apr 30 03:29:32.261360 systemd[1]: Started cri-containerd-4d28d8eb9ced0f3898c0ab630bfa045f43b20312ba2295a85cb1d29958a820c7.scope - libcontainer container 4d28d8eb9ced0f3898c0ab630bfa045f43b20312ba2295a85cb1d29958a820c7. 
Apr 30 03:29:32.305939 containerd[1694]: time="2025-04-30T03:29:32.305810470Z" level=info msg="StartContainer for \"4d28d8eb9ced0f3898c0ab630bfa045f43b20312ba2295a85cb1d29958a820c7\" returns successfully" Apr 30 03:29:32.453406 kubelet[3113]: I0430 03:29:32.453140 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-599578f945-lfhcw" podStartSLOduration=1.062188979 podStartE2EDuration="5.453114655s" podCreationTimestamp="2025-04-30 03:29:27 +0000 UTC" firstStartedPulling="2025-04-30 03:29:27.776405433 +0000 UTC m=+13.521673688" lastFinishedPulling="2025-04-30 03:29:32.167331109 +0000 UTC m=+17.912599364" observedRunningTime="2025-04-30 03:29:32.452518747 +0000 UTC m=+18.197787002" watchObservedRunningTime="2025-04-30 03:29:32.453114655 +0000 UTC m=+18.198383010" Apr 30 03:29:33.360332 kubelet[3113]: E0430 03:29:33.360270 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f8qxp" podUID="7ca18dcc-f415-45c0-be2d-91e3486ac03d" Apr 30 03:29:35.360158 kubelet[3113]: E0430 03:29:35.360075 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f8qxp" podUID="7ca18dcc-f415-45c0-be2d-91e3486ac03d" Apr 30 03:29:37.360076 kubelet[3113]: E0430 03:29:37.360028 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f8qxp" podUID="7ca18dcc-f415-45c0-be2d-91e3486ac03d" Apr 30 03:29:37.690215 containerd[1694]: time="2025-04-30T03:29:37.689884395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:37.691657 containerd[1694]: time="2025-04-30T03:29:37.691598820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:29:37.695217 containerd[1694]: time="2025-04-30T03:29:37.694753465Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:37.700637 containerd[1694]: time="2025-04-30T03:29:37.700603148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:37.701469 containerd[1694]: time="2025-04-30T03:29:37.701332658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.532932834s" Apr 30 03:29:37.701469 containerd[1694]: time="2025-04-30T03:29:37.701368959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference 
\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:29:37.704120 containerd[1694]: time="2025-04-30T03:29:37.703996696Z" level=info msg="CreateContainer within sandbox \"70fb514047dfc5ac67f51f580640da848f2de762f718155b4b5b5d6976d51182\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:29:37.736891 containerd[1694]: time="2025-04-30T03:29:37.736846064Z" level=info msg="CreateContainer within sandbox \"70fb514047dfc5ac67f51f580640da848f2de762f718155b4b5b5d6976d51182\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0435e08e9feceaca2754bbbed410273eab22f2d785c116b7872bdc7b5a9e721f\"" Apr 30 03:29:37.737464 containerd[1694]: time="2025-04-30T03:29:37.737316271Z" level=info msg="StartContainer for \"0435e08e9feceaca2754bbbed410273eab22f2d785c116b7872bdc7b5a9e721f\"" Apr 30 03:29:37.774374 systemd[1]: Started cri-containerd-0435e08e9feceaca2754bbbed410273eab22f2d785c116b7872bdc7b5a9e721f.scope - libcontainer container 0435e08e9feceaca2754bbbed410273eab22f2d785c116b7872bdc7b5a9e721f. Apr 30 03:29:37.801485 containerd[1694]: time="2025-04-30T03:29:37.801442583Z" level=info msg="StartContainer for \"0435e08e9feceaca2754bbbed410273eab22f2d785c116b7872bdc7b5a9e721f\" returns successfully" Apr 30 03:29:39.192784 systemd[1]: cri-containerd-0435e08e9feceaca2754bbbed410273eab22f2d785c116b7872bdc7b5a9e721f.scope: Deactivated successfully. Apr 30 03:29:39.214749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0435e08e9feceaca2754bbbed410273eab22f2d785c116b7872bdc7b5a9e721f-rootfs.mount: Deactivated successfully. Apr 30 03:29:39.278477 kubelet[3113]: I0430 03:29:39.278443 3113 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Apr 30 03:29:39.773229 kubelet[3113]: I0430 03:29:39.377742 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87eb0667-e570-47d8-8451-b9ca008ab0dd-config-volume\") pod \"coredns-6f6b679f8f-8kndr\" (UID: \"87eb0667-e570-47d8-8451-b9ca008ab0dd\") " pod="kube-system/coredns-6f6b679f8f-8kndr" Apr 30 03:29:39.773229 kubelet[3113]: I0430 03:29:39.378778 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a-config-volume\") pod \"coredns-6f6b679f8f-bkvhq\" (UID: \"6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a\") " pod="kube-system/coredns-6f6b679f8f-bkvhq" Apr 30 03:29:39.773229 kubelet[3113]: I0430 03:29:39.378823 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlt7g\" (UniqueName: \"kubernetes.io/projected/6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a-kube-api-access-nlt7g\") pod \"coredns-6f6b679f8f-bkvhq\" (UID: \"6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a\") " pod="kube-system/coredns-6f6b679f8f-bkvhq" Apr 30 03:29:39.773229 kubelet[3113]: I0430 03:29:39.378857 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbs6b\" (UniqueName: \"kubernetes.io/projected/87eb0667-e570-47d8-8451-b9ca008ab0dd-kube-api-access-jbs6b\") pod \"coredns-6f6b679f8f-8kndr\" (UID: \"87eb0667-e570-47d8-8451-b9ca008ab0dd\") " pod="kube-system/coredns-6f6b679f8f-8kndr" Apr 30 03:29:39.773229 kubelet[3113]: I0430 03:29:39.479140 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0becbef8-61a7-43c8-8023-05b47f7451f0-tigera-ca-bundle\") pod \"calico-kube-controllers-5f68864dbc-cw5fx\" (UID: \"0becbef8-61a7-43c8-8023-05b47f7451f0\") " pod="calico-system/calico-kube-controllers-5f68864dbc-cw5fx" Apr 30 03:29:39.334256 systemd[1]: Created slice kubepods-burstable-pod6c056dcd_a6ce_4fc1_ac1b_afb8a60d893a.slice - libcontainer container kubepods-burstable-pod6c056dcd_a6ce_4fc1_ac1b_afb8a60d893a.slice. Apr 30 03:29:39.773650 kubelet[3113]: I0430 03:29:39.479253 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crj4l\" (UniqueName: \"kubernetes.io/projected/0becbef8-61a7-43c8-8023-05b47f7451f0-kube-api-access-crj4l\") pod \"calico-kube-controllers-5f68864dbc-cw5fx\" (UID: \"0becbef8-61a7-43c8-8023-05b47f7451f0\") " pod="calico-system/calico-kube-controllers-5f68864dbc-cw5fx" Apr 30 03:29:39.773650 kubelet[3113]: I0430 03:29:39.479282 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bfsd\" (UniqueName: \"kubernetes.io/projected/e953b696-38c7-4e44-a006-527e804faa59-kube-api-access-8bfsd\") pod \"calico-apiserver-7f7c5c4897-9tqhl\" (UID: \"e953b696-38c7-4e44-a006-527e804faa59\") " pod="calico-apiserver/calico-apiserver-7f7c5c4897-9tqhl" Apr 30 03:29:39.773650 kubelet[3113]: I0430 03:29:39.479306 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a-calico-apiserver-certs\") pod \"calico-apiserver-7f7c5c4897-5rfmb\" (UID: \"eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a\") " pod="calico-apiserver/calico-apiserver-7f7c5c4897-5rfmb" Apr 30 03:29:39.773650 kubelet[3113]: I0430 03:29:39.479348 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4rpf\" (UniqueName: \"kubernetes.io/projected/eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a-kube-api-access-w4rpf\") pod \"calico-apiserver-7f7c5c4897-5rfmb\" (UID: \"eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a\") " pod="calico-apiserver/calico-apiserver-7f7c5c4897-5rfmb" Apr 30 03:29:39.773650 kubelet[3113]: I0430 03:29:39.479377 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e953b696-38c7-4e44-a006-527e804faa59-calico-apiserver-certs\") pod \"calico-apiserver-7f7c5c4897-9tqhl\" (UID: \"e953b696-38c7-4e44-a006-527e804faa59\") " pod="calico-apiserver/calico-apiserver-7f7c5c4897-9tqhl" Apr 30 03:29:39.359736 systemd[1]: Created slice kubepods-burstable-pod87eb0667_e570_47d8_8451_b9ca008ab0dd.slice - libcontainer container kubepods-burstable-pod87eb0667_e570_47d8_8451_b9ca008ab0dd.slice. Apr 30 03:29:39.366504 systemd[1]: Created slice kubepods-besteffort-pode953b696_38c7_4e44_a006_527e804faa59.slice - libcontainer container kubepods-besteffort-pode953b696_38c7_4e44_a006_527e804faa59.slice. Apr 30 03:29:39.377283 systemd[1]: Created slice kubepods-besteffort-podeb6ecfc8_c4f5_4d10_b5a5_3ef2fe85ec1a.slice - libcontainer container kubepods-besteffort-podeb6ecfc8_c4f5_4d10_b5a5_3ef2fe85ec1a.slice. Apr 30 03:29:39.384351 systemd[1]: Created slice kubepods-besteffort-pod0becbef8_61a7_43c8_8023_05b47f7451f0.slice - libcontainer container kubepods-besteffort-pod0becbef8_61a7_43c8_8023_05b47f7451f0.slice. 
Apr 30 03:29:39.399378 systemd[1]: Created slice kubepods-besteffort-pod7ca18dcc_f415_45c0_be2d_91e3486ac03d.slice - libcontainer container kubepods-besteffort-pod7ca18dcc_f415_45c0_be2d_91e3486ac03d.slice. Apr 30 03:29:39.779317 containerd[1694]: time="2025-04-30T03:29:39.774754473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f8qxp,Uid:7ca18dcc-f415-45c0-be2d-91e3486ac03d,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:40.071970 containerd[1694]: time="2025-04-30T03:29:40.071844601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bkvhq,Uid:6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:40.083877 containerd[1694]: time="2025-04-30T03:29:40.083818072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f68864dbc-cw5fx,Uid:0becbef8-61a7-43c8-8023-05b47f7451f0,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:40.084234 containerd[1694]: time="2025-04-30T03:29:40.083818272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c5c4897-5rfmb,Uid:eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:29:40.088921 containerd[1694]: time="2025-04-30T03:29:40.088870844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c5c4897-9tqhl,Uid:e953b696-38c7-4e44-a006-527e804faa59,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:29:40.089255 containerd[1694]: time="2025-04-30T03:29:40.089001846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8kndr,Uid:87eb0667-e570-47d8-8451-b9ca008ab0dd,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:41.478955 containerd[1694]: time="2025-04-30T03:29:41.478893730Z" level=info msg="shim disconnected" id=0435e08e9feceaca2754bbbed410273eab22f2d785c116b7872bdc7b5a9e721f namespace=k8s.io Apr 30 03:29:41.479763 containerd[1694]: time="2025-04-30T03:29:41.479459038Z" level=warning msg="cleaning up after shim disconnected" id=0435e08e9feceaca2754bbbed410273eab22f2d785c116b7872bdc7b5a9e721f namespace=k8s.io Apr 30 03:29:41.479763 containerd[1694]: time="2025-04-30T03:29:41.479488639Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:41.792812 containerd[1694]: time="2025-04-30T03:29:41.792325192Z" level=error msg="Failed to destroy network for sandbox \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.793754 containerd[1694]: time="2025-04-30T03:29:41.793704511Z" level=error msg="encountered an error cleaning up failed sandbox \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.793876 containerd[1694]: time="2025-04-30T03:29:41.793807113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c5c4897-9tqhl,Uid:e953b696-38c7-4e44-a006-527e804faa59,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.794110 kubelet[3113]: E0430 03:29:41.794064 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.795106 kubelet[3113]: E0430 03:29:41.794146 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f7c5c4897-9tqhl" Apr 30 03:29:41.795106 kubelet[3113]: E0430 03:29:41.794172 3113 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f7c5c4897-9tqhl" Apr 30 03:29:41.795106 kubelet[3113]: E0430 03:29:41.794240 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f7c5c4897-9tqhl_calico-apiserver(e953b696-38c7-4e44-a006-527e804faa59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f7c5c4897-9tqhl_calico-apiserver(e953b696-38c7-4e44-a006-527e804faa59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f7c5c4897-9tqhl" podUID="e953b696-38c7-4e44-a006-527e804faa59" Apr 30 03:29:41.806381 containerd[1694]: time="2025-04-30T03:29:41.806311791Z" level=error msg="Failed to destroy network for sandbox \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.807045 containerd[1694]: time="2025-04-30T03:29:41.806860399Z" level=error msg="encountered an error cleaning up failed sandbox \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.807045 containerd[1694]: time="2025-04-30T03:29:41.806936600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f8qxp,Uid:7ca18dcc-f415-45c0-be2d-91e3486ac03d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.807233 kubelet[3113]: E0430 03:29:41.807162 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.807314 kubelet[3113]: E0430 03:29:41.807234 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f8qxp" Apr 30 03:29:41.807314 kubelet[3113]: E0430 03:29:41.807270 3113 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f8qxp" Apr 30 03:29:41.807415 kubelet[3113]: E0430 03:29:41.807319 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f8qxp_calico-system(7ca18dcc-f415-45c0-be2d-91e3486ac03d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f8qxp_calico-system(7ca18dcc-f415-45c0-be2d-91e3486ac03d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f8qxp" podUID="7ca18dcc-f415-45c0-be2d-91e3486ac03d" Apr 30 03:29:41.828212 containerd[1694]: time="2025-04-30T03:29:41.827439092Z" level=error msg="Failed to destroy network for sandbox \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.828571 containerd[1694]: time="2025-04-30T03:29:41.828441606Z" level=error msg="encountered an error cleaning up failed sandbox \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.828673 containerd[1694]: time="2025-04-30T03:29:41.828607608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c5c4897-5rfmb,Uid:eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a,Namespace:calico-apiserver,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.828909 kubelet[3113]: E0430 03:29:41.828871 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.829031 kubelet[3113]: E0430 03:29:41.828939 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f7c5c4897-5rfmb" Apr 30 03:29:41.829031 kubelet[3113]: E0430 03:29:41.828968 3113 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f7c5c4897-5rfmb" Apr 30 03:29:41.829134 kubelet[3113]: E0430 03:29:41.829019 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f7c5c4897-5rfmb_calico-apiserver(eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f7c5c4897-5rfmb_calico-apiserver(eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f7c5c4897-5rfmb" podUID="eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a" Apr 30 03:29:41.836056 containerd[1694]: time="2025-04-30T03:29:41.836013314Z" level=error msg="Failed to destroy network for sandbox \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.836454 containerd[1694]: time="2025-04-30T03:29:41.836416919Z" level=error msg="encountered an error cleaning up failed sandbox \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.836534 containerd[1694]: time="2025-04-30T03:29:41.836507521Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5f68864dbc-cw5fx,Uid:0becbef8-61a7-43c8-8023-05b47f7451f0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.836808 kubelet[3113]: E0430 03:29:41.836769 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.836886 kubelet[3113]: E0430 03:29:41.836842 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f68864dbc-cw5fx" Apr 30 03:29:41.836886 kubelet[3113]: E0430 03:29:41.836870 3113 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f68864dbc-cw5fx" Apr 30 03:29:41.836978 kubelet[3113]: E0430 03:29:41.836928 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f68864dbc-cw5fx_calico-system(0becbef8-61a7-43c8-8023-05b47f7451f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f68864dbc-cw5fx_calico-system(0becbef8-61a7-43c8-8023-05b47f7451f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f68864dbc-cw5fx" podUID="0becbef8-61a7-43c8-8023-05b47f7451f0" Apr 30 03:29:41.842237 containerd[1694]: time="2025-04-30T03:29:41.840884783Z" level=error msg="Failed to destroy network for sandbox \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.842729 containerd[1694]: time="2025-04-30T03:29:41.842691809Z" level=error msg="encountered an error cleaning up failed sandbox \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Apr 30 03:29:41.842807 containerd[1694]: time="2025-04-30T03:29:41.842773910Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bkvhq,Uid:6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.843063 kubelet[3113]: E0430 03:29:41.843030 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.843140 kubelet[3113]: E0430 03:29:41.843100 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bkvhq" Apr 30 03:29:41.843140 kubelet[3113]: E0430 03:29:41.843125 3113 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bkvhq" Apr 30 03:29:41.843269 kubelet[3113]: E0430 03:29:41.843215 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bkvhq_kube-system(6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bkvhq_kube-system(6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bkvhq" podUID="6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a" Apr 30 03:29:41.844059 containerd[1694]: time="2025-04-30T03:29:41.844030728Z" level=error msg="Failed to destroy network for sandbox \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.844448 containerd[1694]: time="2025-04-30T03:29:41.844416233Z" level=error msg="encountered an error cleaning up failed sandbox \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.844593 containerd[1694]: time="2025-04-30T03:29:41.844563735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8kndr,Uid:87eb0667-e570-47d8-8451-b9ca008ab0dd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.844867 kubelet[3113]: E0430 03:29:41.844832 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:41.844944 kubelet[3113]: E0430 03:29:41.844881 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8kndr" Apr 30 03:29:41.844944 kubelet[3113]: E0430 03:29:41.844907 3113 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8kndr" Apr 30 03:29:41.845028 kubelet[3113]: E0430 03:29:41.844946 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8kndr_kube-system(87eb0667-e570-47d8-8451-b9ca008ab0dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8kndr_kube-system(87eb0667-e570-47d8-8451-b9ca008ab0dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8kndr" podUID="87eb0667-e570-47d8-8451-b9ca008ab0dd" Apr 30 03:29:42.468468 kubelet[3113]: I0430 03:29:42.468429 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Apr 30 03:29:42.469909 containerd[1694]: time="2025-04-30T03:29:42.469312933Z" level=info msg="StopPodSandbox for \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\"" Apr 30 03:29:42.469909 containerd[1694]: time="2025-04-30T03:29:42.469556237Z" level=info msg="Ensure that sandbox af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad in task-service has been cleanup successfully" Apr 30 03:29:42.477546 containerd[1694]: 
time="2025-04-30T03:29:42.477506250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:29:42.477921 kubelet[3113]: I0430 03:29:42.477880 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Apr 30 03:29:42.481888 containerd[1694]: time="2025-04-30T03:29:42.480589894Z" level=info msg="StopPodSandbox for \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\"" Apr 30 03:29:42.482942 containerd[1694]: time="2025-04-30T03:29:42.482424820Z" level=info msg="Ensure that sandbox 993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627 in task-service has been cleanup successfully" Apr 30 03:29:42.483740 kubelet[3113]: I0430 03:29:42.483716 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Apr 30 03:29:42.484723 containerd[1694]: time="2025-04-30T03:29:42.484697652Z" level=info msg="StopPodSandbox for \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\"" Apr 30 03:29:42.485392 containerd[1694]: time="2025-04-30T03:29:42.485368362Z" level=info msg="Ensure that sandbox 8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389 in task-service has been cleanup successfully" Apr 30 03:29:42.503496 kubelet[3113]: I0430 03:29:42.503435 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Apr 30 03:29:42.505489 containerd[1694]: time="2025-04-30T03:29:42.505456148Z" level=info msg="StopPodSandbox for \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\"" Apr 30 03:29:42.506034 containerd[1694]: time="2025-04-30T03:29:42.505815353Z" level=info msg="Ensure that sandbox 0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc in task-service has been cleanup successfully" Apr 30 03:29:42.510557 kubelet[3113]: I0430 03:29:42.510323 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Apr 30 03:29:42.513412 containerd[1694]: time="2025-04-30T03:29:42.512851753Z" level=info msg="StopPodSandbox for \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\"" Apr 30 03:29:42.513412 containerd[1694]: time="2025-04-30T03:29:42.513068056Z" level=info msg="Ensure that sandbox 70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30 in task-service has been cleanup successfully" Apr 30 03:29:42.519205 kubelet[3113]: I0430 03:29:42.519172 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Apr 30 03:29:42.525917 containerd[1694]: time="2025-04-30T03:29:42.525883039Z" level=info msg="StopPodSandbox for \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\"" Apr 30 03:29:42.526255 containerd[1694]: time="2025-04-30T03:29:42.526227844Z" level=info msg="Ensure that sandbox 0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360 in task-service has been cleanup successfully" Apr 30 03:29:42.569661 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc-shm.mount: Deactivated successfully. 
Apr 30 03:29:42.570147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad-shm.mount: Deactivated successfully. Apr 30 03:29:42.570240 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30-shm.mount: Deactivated successfully. Apr 30 03:29:42.570327 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360-shm.mount: Deactivated successfully. Apr 30 03:29:42.613420 containerd[1694]: time="2025-04-30T03:29:42.613350885Z" level=error msg="StopPodSandbox for \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\" failed" error="failed to destroy network for sandbox \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:42.613987 kubelet[3113]: E0430 03:29:42.613613 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Apr 30 03:29:42.613987 kubelet[3113]: E0430 03:29:42.613679 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad"} Apr 30 03:29:42.613987 kubelet[3113]: E0430 03:29:42.613764 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:42.613987 kubelet[3113]: E0430 03:29:42.613802 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bkvhq" podUID="6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a" Apr 30 03:29:42.617161 containerd[1694]: time="2025-04-30T03:29:42.617098638Z" level=error msg="StopPodSandbox for \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\" failed" error="failed to destroy network for sandbox \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:42.617759 kubelet[3113]: E0430 
03:29:42.617445 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Apr 30 03:29:42.617759 kubelet[3113]: E0430 03:29:42.617490 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc"} Apr 30 03:29:42.617759 kubelet[3113]: E0430 03:29:42.617527 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0becbef8-61a7-43c8-8023-05b47f7451f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:42.617759 kubelet[3113]: E0430 03:29:42.617554 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0becbef8-61a7-43c8-8023-05b47f7451f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f68864dbc-cw5fx" podUID="0becbef8-61a7-43c8-8023-05b47f7451f0" Apr 30 03:29:42.630501 containerd[1694]: time="2025-04-30T03:29:42.630328526Z" level=error msg="StopPodSandbox for \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\" failed" error="failed to destroy network for sandbox \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:42.630634 kubelet[3113]: E0430 03:29:42.630597 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Apr 30 03:29:42.630700 kubelet[3113]: E0430 03:29:42.630644 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360"} Apr 30 03:29:42.630700 kubelet[3113]: E0430 03:29:42.630687 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e953b696-38c7-4e44-a006-527e804faa59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:42.630817 kubelet[3113]: E0430 03:29:42.630715 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e953b696-38c7-4e44-a006-527e804faa59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f7c5c4897-9tqhl" podUID="e953b696-38c7-4e44-a006-527e804faa59" Apr 30 03:29:42.640790 containerd[1694]: time="2025-04-30T03:29:42.640411370Z" level=error msg="StopPodSandbox for \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\" failed" error="failed to destroy network for sandbox \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:42.641222 kubelet[3113]: E0430 03:29:42.640941 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Apr 30 03:29:42.641222 kubelet[3113]: E0430 03:29:42.641096 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627"} Apr 30 03:29:42.641375 containerd[1694]: time="2025-04-30T03:29:42.641063779Z" level=error msg="StopPodSandbox for \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\" failed" error="failed to destroy network for sandbox \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:42.641434 kubelet[3113]: E0430 03:29:42.641357 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Apr 30 03:29:42.641434 kubelet[3113]: E0430 03:29:42.641389 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389"} Apr 30 03:29:42.641434 kubelet[3113]: E0430 03:29:42.641422 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:42.641585 kubelet[3113]: E0430 03:29:42.641448 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f7c5c4897-5rfmb" podUID="eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a" Apr 30 03:29:42.641669 kubelet[3113]: E0430 03:29:42.641142 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"87eb0667-e570-47d8-8451-b9ca008ab0dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:42.641763 kubelet[3113]: E0430 03:29:42.641672 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"87eb0667-e570-47d8-8451-b9ca008ab0dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8kndr" podUID="87eb0667-e570-47d8-8451-b9ca008ab0dd" Apr 30 03:29:42.643028 containerd[1694]: time="2025-04-30T03:29:42.642952806Z" level=error msg="StopPodSandbox for \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\" failed" error="failed to destroy network for sandbox \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:42.643167 kubelet[3113]: E0430 03:29:42.643136 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Apr 30 03:29:42.643263 kubelet[3113]: E0430 03:29:42.643174 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30"} Apr 30 03:29:42.643263 kubelet[3113]: E0430 03:29:42.643254 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"7ca18dcc-f415-45c0-be2d-91e3486ac03d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:42.643374 kubelet[3113]: E0430 03:29:42.643281 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ca18dcc-f415-45c0-be2d-91e3486ac03d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f8qxp" podUID="7ca18dcc-f415-45c0-be2d-91e3486ac03d" Apr 30 03:29:50.182610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822127293.mount: Deactivated successfully. Apr 30 03:29:50.221849 containerd[1694]: time="2025-04-30T03:29:50.221793635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:50.223706 containerd[1694]: time="2025-04-30T03:29:50.223641361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:29:50.226924 containerd[1694]: time="2025-04-30T03:29:50.226872805Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:50.230523 containerd[1694]: time="2025-04-30T03:29:50.230472354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:50.231238 containerd[1694]: time="2025-04-30T03:29:50.231065062Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.753512612s" Apr 30 03:29:50.231238 containerd[1694]: time="2025-04-30T03:29:50.231105463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:29:50.254034 containerd[1694]: time="2025-04-30T03:29:50.253982575Z" level=info msg="CreateContainer within sandbox \"70fb514047dfc5ac67f51f580640da848f2de762f718155b4b5b5d6976d51182\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:29:50.291307 containerd[1694]: time="2025-04-30T03:29:50.291261385Z" level=info msg="CreateContainer within sandbox \"70fb514047dfc5ac67f51f580640da848f2de762f718155b4b5b5d6976d51182\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"24eb1615c9c1237ff80d82c6c938f59f9f39a93b2c7140b687ad7b07efc91afe\"" Apr 30 03:29:50.293582 containerd[1694]: time="2025-04-30T03:29:50.291897893Z" level=info msg="StartContainer 
for \"24eb1615c9c1237ff80d82c6c938f59f9f39a93b2c7140b687ad7b07efc91afe\"" Apr 30 03:29:50.321383 systemd[1]: Started cri-containerd-24eb1615c9c1237ff80d82c6c938f59f9f39a93b2c7140b687ad7b07efc91afe.scope - libcontainer container 24eb1615c9c1237ff80d82c6c938f59f9f39a93b2c7140b687ad7b07efc91afe. Apr 30 03:29:50.354323 containerd[1694]: time="2025-04-30T03:29:50.354155544Z" level=info msg="StartContainer for \"24eb1615c9c1237ff80d82c6c938f59f9f39a93b2c7140b687ad7b07efc91afe\" returns successfully" Apr 30 03:29:50.619255 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:29:50.619404 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Apr 30 03:29:52.239224 kernel: bpftool[4457]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:29:52.489883 systemd-networkd[1331]: vxlan.calico: Link UP Apr 30 03:29:52.489894 systemd-networkd[1331]: vxlan.calico: Gained carrier Apr 30 03:29:53.362723 containerd[1694]: time="2025-04-30T03:29:53.361333335Z" level=info msg="StopPodSandbox for \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\"" Apr 30 03:29:53.411016 kubelet[3113]: I0430 03:29:53.410452 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p4h8j" podStartSLOduration=3.940498464 podStartE2EDuration="26.410426006s" podCreationTimestamp="2025-04-30 03:29:27 +0000 UTC" firstStartedPulling="2025-04-30 03:29:27.767425906 +0000 UTC m=+13.512694261" lastFinishedPulling="2025-04-30 03:29:50.237353548 +0000 UTC m=+35.982621803" observedRunningTime="2025-04-30 03:29:50.580037631 +0000 UTC m=+36.325305986" watchObservedRunningTime="2025-04-30 03:29:53.410426006 +0000 UTC m=+39.155694261" Apr 30 03:29:53.442143 containerd[1694]: 2025-04-30 03:29:53.408 [INFO][4540] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Apr 30 03:29:53.442143 containerd[1694]: 2025-04-30 03:29:53.408 [INFO][4540] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" iface="eth0" netns="/var/run/netns/cni-31053888-4785-795c-856d-6ae28ef1e76f" Apr 30 03:29:53.442143 containerd[1694]: 2025-04-30 03:29:53.409 [INFO][4540] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" iface="eth0" netns="/var/run/netns/cni-31053888-4785-795c-856d-6ae28ef1e76f" Apr 30 03:29:53.442143 containerd[1694]: 2025-04-30 03:29:53.410 [INFO][4540] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" iface="eth0" netns="/var/run/netns/cni-31053888-4785-795c-856d-6ae28ef1e76f" Apr 30 03:29:53.442143 containerd[1694]: 2025-04-30 03:29:53.410 [INFO][4540] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Apr 30 03:29:53.442143 containerd[1694]: 2025-04-30 03:29:53.410 [INFO][4540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Apr 30 03:29:53.442143 containerd[1694]: 2025-04-30 03:29:53.430 [INFO][4547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" HandleID="k8s-pod-network.70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Workload="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:29:53.442143 containerd[1694]: 2025-04-30 03:29:53.430 [INFO][4547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:53.442143 containerd[1694]: 2025-04-30 03:29:53.430 [INFO][4547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:53.442143 containerd[1694]: 2025-04-30 03:29:53.437 [WARNING][4547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" HandleID="k8s-pod-network.70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Workload="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:29:53.442143 containerd[1694]: 2025-04-30 03:29:53.437 [INFO][4547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" HandleID="k8s-pod-network.70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Workload="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:29:53.442143 containerd[1694]: 2025-04-30 03:29:53.438 [INFO][4547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:53.442143 containerd[1694]: 2025-04-30 03:29:53.441 [INFO][4540] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Apr 30 03:29:53.444369 containerd[1694]: time="2025-04-30T03:29:53.444328969Z" level=info msg="TearDown network for sandbox \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\" successfully" Apr 30 03:29:53.444369 containerd[1694]: time="2025-04-30T03:29:53.444369170Z" level=info msg="StopPodSandbox for \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\" returns successfully" Apr 30 03:29:53.446536 systemd[1]: run-netns-cni\x2d31053888\x2d4785\x2d795c\x2d856d\x2d6ae28ef1e76f.mount: Deactivated successfully. 
Apr 30 03:29:53.447195 containerd[1694]: time="2025-04-30T03:29:53.447154008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f8qxp,Uid:7ca18dcc-f415-45c0-be2d-91e3486ac03d,Namespace:calico-system,Attempt:1,}" Apr 30 03:29:53.582723 systemd-networkd[1331]: caliaae0766ac59: Link UP Apr 30 03:29:53.582981 systemd-networkd[1331]: caliaae0766ac59: Gained carrier Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.513 [INFO][4556] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0 csi-node-driver- calico-system 7ca18dcc-f415-45c0-be2d-91e3486ac03d 774 0 2025-04-30 03:29:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-a-afe39379c7 csi-node-driver-f8qxp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaae0766ac59 [] []}} ContainerID="c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" Namespace="calico-system" Pod="csi-node-driver-f8qxp" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-" Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.513 [INFO][4556] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" Namespace="calico-system" Pod="csi-node-driver-f8qxp" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.539 [INFO][4566] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" HandleID="k8s-pod-network.c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" Workload="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.548 [INFO][4566] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" HandleID="k8s-pod-network.c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" Workload="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290a90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-afe39379c7", "pod":"csi-node-driver-f8qxp", "timestamp":"2025-04-30 03:29:53.539027763 +0000 UTC"}, Hostname:"ci-4081.3.3-a-afe39379c7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.548 [INFO][4566] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.548 [INFO][4566] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.548 [INFO][4566] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-afe39379c7' Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.550 [INFO][4566] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.553 [INFO][4566] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.558 [INFO][4566] ipam/ipam.go 489: Trying affinity for 192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.559 [INFO][4566] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.561 [INFO][4566] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.561 [INFO][4566] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.562 [INFO][4566] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045 Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.566 [INFO][4566] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.0/26 handle="k8s-pod-network.c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.575 [INFO][4566] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.1/26] block=192.168.79.0/26 handle="k8s-pod-network.c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.575 [INFO][4566] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.1/26] handle="k8s-pod-network.c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.575 [INFO][4566] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:53.603132 containerd[1694]: 2025-04-30 03:29:53.575 [INFO][4566] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.1/26] IPv6=[] ContainerID="c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" HandleID="k8s-pod-network.c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" Workload="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:29:53.604027 containerd[1694]: 2025-04-30 03:29:53.577 [INFO][4556] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" Namespace="calico-system" Pod="csi-node-driver-f8qxp" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ca18dcc-f415-45c0-be2d-91e3486ac03d", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"", Pod:"csi-node-driver-f8qxp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaae0766ac59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:53.604027 containerd[1694]: 2025-04-30 03:29:53.577 [INFO][4556] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.1/32] ContainerID="c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" Namespace="calico-system" Pod="csi-node-driver-f8qxp" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:29:53.604027 containerd[1694]: 2025-04-30 03:29:53.577 [INFO][4556] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaae0766ac59 ContainerID="c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" Namespace="calico-system" Pod="csi-node-driver-f8qxp" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:29:53.604027 containerd[1694]: 2025-04-30 03:29:53.583 [INFO][4556] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" Namespace="calico-system" Pod="csi-node-driver-f8qxp" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:29:53.604027 containerd[1694]: 2025-04-30 03:29:53.583 [INFO][4556] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" 
Namespace="calico-system" Pod="csi-node-driver-f8qxp" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ca18dcc-f415-45c0-be2d-91e3486ac03d", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045", Pod:"csi-node-driver-f8qxp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaae0766ac59", MAC:"e6:57:1a:83:72:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:53.604027 containerd[1694]: 2025-04-30 03:29:53.601 [INFO][4556] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045" Namespace="calico-system" Pod="csi-node-driver-f8qxp" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:29:53.630798 containerd[1694]: time="2025-04-30T03:29:53.630396812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:53.630798 containerd[1694]: time="2025-04-30T03:29:53.630503613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:53.630798 containerd[1694]: time="2025-04-30T03:29:53.630524913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:53.630798 containerd[1694]: time="2025-04-30T03:29:53.630641415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:53.664366 systemd[1]: Started cri-containerd-c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045.scope - libcontainer container c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045. 
Apr 30 03:29:53.686590 containerd[1694]: time="2025-04-30T03:29:53.686398177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f8qxp,Uid:7ca18dcc-f415-45c0-be2d-91e3486ac03d,Namespace:calico-system,Attempt:1,} returns sandbox id \"c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045\"" Apr 30 03:29:53.688784 containerd[1694]: time="2025-04-30T03:29:53.688699608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:29:54.238414 systemd-networkd[1331]: vxlan.calico: Gained IPv6LL Apr 30 03:29:54.363972 containerd[1694]: time="2025-04-30T03:29:54.363827033Z" level=info msg="StopPodSandbox for \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\"" Apr 30 03:29:54.445772 systemd[1]: run-containerd-runc-k8s.io-c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045-runc.euCedQ.mount: Deactivated successfully. Apr 30 03:29:54.455434 containerd[1694]: 2025-04-30 03:29:54.412 [INFO][4639] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Apr 30 03:29:54.455434 containerd[1694]: 2025-04-30 03:29:54.412 [INFO][4639] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" iface="eth0" netns="/var/run/netns/cni-d25b68dc-219f-c9bd-7de6-92fe58cc2bc3" Apr 30 03:29:54.455434 containerd[1694]: 2025-04-30 03:29:54.413 [INFO][4639] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" iface="eth0" netns="/var/run/netns/cni-d25b68dc-219f-c9bd-7de6-92fe58cc2bc3" Apr 30 03:29:54.455434 containerd[1694]: 2025-04-30 03:29:54.413 [INFO][4639] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" iface="eth0" netns="/var/run/netns/cni-d25b68dc-219f-c9bd-7de6-92fe58cc2bc3" Apr 30 03:29:54.455434 containerd[1694]: 2025-04-30 03:29:54.414 [INFO][4639] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Apr 30 03:29:54.455434 containerd[1694]: 2025-04-30 03:29:54.414 [INFO][4639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Apr 30 03:29:54.455434 containerd[1694]: 2025-04-30 03:29:54.440 [INFO][4646] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" HandleID="k8s-pod-network.0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:29:54.455434 containerd[1694]: 2025-04-30 03:29:54.440 [INFO][4646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:54.455434 containerd[1694]: 2025-04-30 03:29:54.440 [INFO][4646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:54.455434 containerd[1694]: 2025-04-30 03:29:54.451 [WARNING][4646] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" HandleID="k8s-pod-network.0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:29:54.455434 containerd[1694]: 2025-04-30 03:29:54.451 [INFO][4646] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" HandleID="k8s-pod-network.0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:29:54.455434 containerd[1694]: 2025-04-30 03:29:54.453 [INFO][4646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:54.455434 containerd[1694]: 2025-04-30 03:29:54.454 [INFO][4639] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Apr 30 03:29:54.457875 containerd[1694]: time="2025-04-30T03:29:54.457255210Z" level=info msg="TearDown network for sandbox \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\" successfully" Apr 30 03:29:54.457875 containerd[1694]: time="2025-04-30T03:29:54.457292511Z" level=info msg="StopPodSandbox for \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\" returns successfully" Apr 30 03:29:54.458037 containerd[1694]: time="2025-04-30T03:29:54.458006420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c5c4897-9tqhl,Uid:e953b696-38c7-4e44-a006-527e804faa59,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:29:54.460839 systemd[1]: run-netns-cni\x2dd25b68dc\x2d219f\x2dc9bd\x2d7de6\x2d92fe58cc2bc3.mount: Deactivated successfully. 
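Both teardown traces end the same way: "[WARNING] Asked to release address but it doesn't exist. Ignoring." These sandboxes were created before calico-node was up, so no address was ever allocated for them; CNI DEL is expected to be idempotent, which means a missing allocation is logged and skipped rather than surfaced as an error. A sketch of that behavior, with illustrative names:

package main

import "log"

// releaseByHandle frees whatever is recorded under handleID. A missing
// handle is logged and ignored, never returned as an error, so repeated
// or premature DELs (like the ones above) still let teardown complete.
func releaseByHandle(alloc map[string]string, handleID string) {
	if _, ok := alloc[handleID]; !ok {
		log.Printf("WARNING: Asked to release address but it doesn't exist. Ignoring (handle=%s)", handleID)
		return
	}
	delete(alloc, handleID)
}

func main() {
	releaseByHandle(map[string]string{}, "k8s-pod-network.0aef7e3a...") // no-op; teardown proceeds
}

That is why each StopPodSandbox now returns successfully and the stale netns mounts can be deactivated, clearing the way for the Attempt:1 sandboxes that follow.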
Apr 30 03:29:54.624103 systemd-networkd[1331]: califae6339731d: Link UP Apr 30 03:29:54.624376 systemd-networkd[1331]: califae6339731d: Gained carrier Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.555 [INFO][4653] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0 calico-apiserver-7f7c5c4897- calico-apiserver e953b696-38c7-4e44-a006-527e804faa59 782 0 2025-04-30 03:29:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f7c5c4897 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-afe39379c7 calico-apiserver-7f7c5c4897-9tqhl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califae6339731d [] []}} ContainerID="051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c5c4897-9tqhl" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-" Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.555 [INFO][4653] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c5c4897-9tqhl" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.584 [INFO][4665] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" HandleID="k8s-pod-network.051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.594 [INFO][4665] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" HandleID="k8s-pod-network.051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-afe39379c7", "pod":"calico-apiserver-7f7c5c4897-9tqhl", "timestamp":"2025-04-30 03:29:54.584763052 +0000 UTC"}, Hostname:"ci-4081.3.3-a-afe39379c7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.594 [INFO][4665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.594 [INFO][4665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.594 [INFO][4665] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-afe39379c7' Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.596 [INFO][4665] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.599 [INFO][4665] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.602 [INFO][4665] ipam/ipam.go 489: Trying affinity for 192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.603 [INFO][4665] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.605 [INFO][4665] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.605 [INFO][4665] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.606 [INFO][4665] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392 Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.610 [INFO][4665] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.0/26 handle="k8s-pod-network.051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.618 [INFO][4665] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.2/26] block=192.168.79.0/26 handle="k8s-pod-network.051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.618 [INFO][4665] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.2/26] handle="k8s-pod-network.051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.618 [INFO][4665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:54.641130 containerd[1694]: 2025-04-30 03:29:54.618 [INFO][4665] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.2/26] IPv6=[] ContainerID="051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" HandleID="k8s-pod-network.051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:29:54.642065 containerd[1694]: 2025-04-30 03:29:54.620 [INFO][4653] cni-plugin/k8s.go 386: Populated endpoint ContainerID="051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c5c4897-9tqhl" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0", GenerateName:"calico-apiserver-7f7c5c4897-", Namespace:"calico-apiserver", SelfLink:"", UID:"e953b696-38c7-4e44-a006-527e804faa59", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c5c4897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"", Pod:"calico-apiserver-7f7c5c4897-9tqhl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califae6339731d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:54.642065 containerd[1694]: 2025-04-30 03:29:54.620 [INFO][4653] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.2/32] ContainerID="051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c5c4897-9tqhl" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:29:54.642065 containerd[1694]: 2025-04-30 03:29:54.620 [INFO][4653] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califae6339731d ContainerID="051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c5c4897-9tqhl" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:29:54.642065 containerd[1694]: 2025-04-30 03:29:54.622 [INFO][4653] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c5c4897-9tqhl" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:29:54.642065 containerd[1694]: 2025-04-30 03:29:54.622 [INFO][4653] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c5c4897-9tqhl" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0", GenerateName:"calico-apiserver-7f7c5c4897-", Namespace:"calico-apiserver", SelfLink:"", UID:"e953b696-38c7-4e44-a006-527e804faa59", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c5c4897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392", Pod:"calico-apiserver-7f7c5c4897-9tqhl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califae6339731d", MAC:"ba:73:b5:fa:30:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:54.642065 containerd[1694]: 2025-04-30 03:29:54.637 [INFO][4653] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c5c4897-9tqhl" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:29:54.670804 containerd[1694]: time="2025-04-30T03:29:54.670626126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:54.670804 containerd[1694]: time="2025-04-30T03:29:54.670707527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:54.671266 containerd[1694]: time="2025-04-30T03:29:54.670821228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:54.671266 containerd[1694]: time="2025-04-30T03:29:54.671031931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:54.698335 systemd[1]: Started cri-containerd-051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392.scope - libcontainer container 051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392. 
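The second workload walks the identical path and lands on 192.168.79.2: the affinity lookup succeeds immediately because the node already owns 192.168.79.0/26, and the block simply hands out its next free address. A /26 gives each node a block of 64 addresses, with allocation in this log starting at .1; both facts can be checked directly:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.79.0/26")
	fmt.Println("addresses per block:", 1<<(32-block.Bits())) // 64

	// Allocation above starts at the successor of the block base:
	// .1 (csi-node-driver-f8qxp), then .2 (calico-apiserver-7f7c5c4897-9tqhl).
	a := block.Addr().Next()
	fmt.Println(a, a.Next()) // 192.168.79.1 192.168.79.2
}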
Apr 30 03:29:54.738351 containerd[1694]: time="2025-04-30T03:29:54.738308850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c5c4897-9tqhl,Uid:e953b696-38c7-4e44-a006-527e804faa59,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392\"" Apr 30 03:29:55.262370 systemd-networkd[1331]: caliaae0766ac59: Gained IPv6LL Apr 30 03:29:55.272468 containerd[1694]: time="2025-04-30T03:29:55.272420449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:55.274268 containerd[1694]: time="2025-04-30T03:29:55.274203173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:29:55.277401 containerd[1694]: time="2025-04-30T03:29:55.277347616Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:55.282124 containerd[1694]: time="2025-04-30T03:29:55.282072481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:55.282866 containerd[1694]: time="2025-04-30T03:29:55.282830491Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.594084082s" Apr 30 03:29:55.282959 containerd[1694]: time="2025-04-30T03:29:55.282864391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:29:55.285864 containerd[1694]: time="2025-04-30T03:29:55.283993907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:29:55.285864 containerd[1694]: time="2025-04-30T03:29:55.285219424Z" level=info msg="CreateContainer within sandbox \"c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:29:55.325109 containerd[1694]: time="2025-04-30T03:29:55.325061468Z" level=info msg="CreateContainer within sandbox \"c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"89a87a46ea6f95f6762d21d650e4570cc79783940c248c424f63d8fd1b5d0915\"" Apr 30 03:29:55.325841 containerd[1694]: time="2025-04-30T03:29:55.325779978Z" level=info msg="StartContainer for \"89a87a46ea6f95f6762d21d650e4570cc79783940c248c424f63d8fd1b5d0915\"" Apr 30 03:29:55.354341 systemd[1]: Started cri-containerd-89a87a46ea6f95f6762d21d650e4570cc79783940c248c424f63d8fd1b5d0915.scope - libcontainer container 89a87a46ea6f95f6762d21d650e4570cc79783940c248c424f63d8fd1b5d0915. 
Apr 30 03:29:55.382965 containerd[1694]: time="2025-04-30T03:29:55.382923559Z" level=info msg="StartContainer for \"89a87a46ea6f95f6762d21d650e4570cc79783940c248c424f63d8fd1b5d0915\" returns successfully" Apr 30 03:29:56.030400 systemd-networkd[1331]: califae6339731d: Gained IPv6LL Apr 30 03:29:56.362215 containerd[1694]: time="2025-04-30T03:29:56.361746734Z" level=info msg="StopPodSandbox for \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\"" Apr 30 03:29:56.363217 containerd[1694]: time="2025-04-30T03:29:56.362842949Z" level=info msg="StopPodSandbox for \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\"" Apr 30 03:29:56.475289 containerd[1694]: 2025-04-30 03:29:56.434 [INFO][4791] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Apr 30 03:29:56.475289 containerd[1694]: 2025-04-30 03:29:56.434 [INFO][4791] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" iface="eth0" netns="/var/run/netns/cni-2eb3b8e5-7346-6466-2447-dc6bc8f0feca" Apr 30 03:29:56.475289 containerd[1694]: 2025-04-30 03:29:56.435 [INFO][4791] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" iface="eth0" netns="/var/run/netns/cni-2eb3b8e5-7346-6466-2447-dc6bc8f0feca" Apr 30 03:29:56.475289 containerd[1694]: 2025-04-30 03:29:56.435 [INFO][4791] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" iface="eth0" netns="/var/run/netns/cni-2eb3b8e5-7346-6466-2447-dc6bc8f0feca" Apr 30 03:29:56.475289 containerd[1694]: 2025-04-30 03:29:56.435 [INFO][4791] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Apr 30 03:29:56.475289 containerd[1694]: 2025-04-30 03:29:56.436 [INFO][4791] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Apr 30 03:29:56.475289 containerd[1694]: 2025-04-30 03:29:56.464 [INFO][4806] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" HandleID="k8s-pod-network.993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:29:56.475289 containerd[1694]: 2025-04-30 03:29:56.464 [INFO][4806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:56.475289 containerd[1694]: 2025-04-30 03:29:56.464 [INFO][4806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:56.475289 containerd[1694]: 2025-04-30 03:29:56.471 [WARNING][4806] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" HandleID="k8s-pod-network.993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:29:56.475289 containerd[1694]: 2025-04-30 03:29:56.471 [INFO][4806] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" HandleID="k8s-pod-network.993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:29:56.475289 containerd[1694]: 2025-04-30 03:29:56.472 [INFO][4806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:56.475289 containerd[1694]: 2025-04-30 03:29:56.473 [INFO][4791] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Apr 30 03:29:56.477641 containerd[1694]: time="2025-04-30T03:29:56.476436301Z" level=info msg="TearDown network for sandbox \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\" successfully" Apr 30 03:29:56.477641 containerd[1694]: time="2025-04-30T03:29:56.476862507Z" level=info msg="StopPodSandbox for \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\" returns successfully" Apr 30 03:29:56.484405 containerd[1694]: time="2025-04-30T03:29:56.483947903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8kndr,Uid:87eb0667-e570-47d8-8451-b9ca008ab0dd,Namespace:kube-system,Attempt:1,}" Apr 30 03:29:56.485542 systemd[1]: run-netns-cni\x2d2eb3b8e5\x2d7346\x2d6466\x2d2447\x2ddc6bc8f0feca.mount: Deactivated successfully. Apr 30 03:29:56.497442 containerd[1694]: 2025-04-30 03:29:56.428 [INFO][4783] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Apr 30 03:29:56.497442 containerd[1694]: 2025-04-30 03:29:56.429 [INFO][4783] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" iface="eth0" netns="/var/run/netns/cni-a132611b-5baf-d934-e5f7-5ff7b5cbb76e" Apr 30 03:29:56.497442 containerd[1694]: 2025-04-30 03:29:56.429 [INFO][4783] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" iface="eth0" netns="/var/run/netns/cni-a132611b-5baf-d934-e5f7-5ff7b5cbb76e" Apr 30 03:29:56.497442 containerd[1694]: 2025-04-30 03:29:56.430 [INFO][4783] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" iface="eth0" netns="/var/run/netns/cni-a132611b-5baf-d934-e5f7-5ff7b5cbb76e" Apr 30 03:29:56.497442 containerd[1694]: 2025-04-30 03:29:56.430 [INFO][4783] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Apr 30 03:29:56.497442 containerd[1694]: 2025-04-30 03:29:56.430 [INFO][4783] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Apr 30 03:29:56.497442 containerd[1694]: 2025-04-30 03:29:56.464 [INFO][4801] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" HandleID="k8s-pod-network.8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" Apr 30 03:29:56.497442 containerd[1694]: 2025-04-30 03:29:56.464 [INFO][4801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:56.497442 containerd[1694]: 2025-04-30 03:29:56.472 [INFO][4801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:56.497442 containerd[1694]: 2025-04-30 03:29:56.490 [WARNING][4801] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" HandleID="k8s-pod-network.8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" Apr 30 03:29:56.497442 containerd[1694]: 2025-04-30 03:29:56.490 [INFO][4801] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" HandleID="k8s-pod-network.8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" Apr 30 03:29:56.497442 containerd[1694]: 2025-04-30 03:29:56.492 [INFO][4801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:56.497442 containerd[1694]: 2025-04-30 03:29:56.494 [INFO][4783] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Apr 30 03:29:56.497442 containerd[1694]: time="2025-04-30T03:29:56.497535489Z" level=info msg="TearDown network for sandbox \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\" successfully" Apr 30 03:29:56.497442 containerd[1694]: time="2025-04-30T03:29:56.497561889Z" level=info msg="StopPodSandbox for \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\" returns successfully" Apr 30 03:29:56.502622 containerd[1694]: time="2025-04-30T03:29:56.499784020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c5c4897-5rfmb,Uid:eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:29:56.501930 systemd[1]: run-netns-cni\x2da132611b\x2d5baf\x2dd934\x2de5f7\x2d5ff7b5cbb76e.mount: Deactivated successfully.
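The run-netns-cni\x2d… mount units that systemd reports deactivating are the CNI netns bind mounts under /var/run/netns with systemd's path escaping applied: "/" becomes "-" and a literal "-" becomes "\x2d". A small Go sketch of the reverse mapping, roughly what systemd-escape --unescape --path does (minimal, ignores corner cases of real unit names):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeMountUnit turns a systemd mount unit name back into the path it
// guards: "\xNN" decodes to a literal byte and a bare "-" decodes to "/".
func unescapeMountUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var out strings.Builder
	out.WriteByte('/') // unit names drop the leading slash
	for i := 0; i < len(name); i++ {
		if strings.HasPrefix(name[i:], `\x`) && i+4 <= len(name) {
			if b, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				out.WriteByte(byte(b))
				i += 3
				continue
			}
		}
		if name[i] == '-' {
			out.WriteByte('/')
		} else {
			out.WriteByte(name[i])
		}
	}
	return out.String()
}

func main() {
	// The unit deactivated above maps back to the netns path the CNI
	// teardown logs mention (/var/run is a symlink to /run):
	// prints /run/netns/cni-a132611b-5baf-d934-e5f7-5ff7b5cbb76e
	fmt.Println(unescapeMountUnit(`run-netns-cni\x2da132611b\x2d5baf\x2dd934\x2de5f7\x2d5ff7b5cbb76e.mount`))
}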
Apr 30 03:29:56.741656 systemd-networkd[1331]: cali744bf05f77f: Link UP Apr 30 03:29:56.743325 systemd-networkd[1331]: cali744bf05f77f: Gained carrier Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.635 [INFO][4814] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0 coredns-6f6b679f8f- kube-system 87eb0667-e570-47d8-8451-b9ca008ab0dd 798 0 2025-04-30 03:29:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-afe39379c7 coredns-6f6b679f8f-8kndr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali744bf05f77f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-8kndr" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-" Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.635 [INFO][4814] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-8kndr" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.683 [INFO][4839] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" HandleID="k8s-pod-network.d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.697 [INFO][4839] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" HandleID="k8s-pod-network.d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ee0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-afe39379c7", "pod":"coredns-6f6b679f8f-8kndr", "timestamp":"2025-04-30 03:29:56.683828135 +0000 UTC"}, Hostname:"ci-4081.3.3-a-afe39379c7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.697 [INFO][4839] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.697 [INFO][4839] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.697 [INFO][4839] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-afe39379c7' Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.700 [INFO][4839] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.706 [INFO][4839] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.711 [INFO][4839] ipam/ipam.go 489: Trying affinity for 192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.713 [INFO][4839] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.716 [INFO][4839] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.716 [INFO][4839] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.719 [INFO][4839] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.727 [INFO][4839] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.0/26 handle="k8s-pod-network.d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.735 [INFO][4839] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.3/26] block=192.168.79.0/26 handle="k8s-pod-network.d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.735 [INFO][4839] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.3/26] handle="k8s-pod-network.d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.735 [INFO][4839] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
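The ipam.go records above spell out the whole grant path: take the host-wide lock, confirm this node's affinity for the 192.168.79.0/26 block, load the block, and claim the lowest free address; a /26 gives the node 2^(32-26) = 64 slots. A toy Go model of just that arithmetic (an illustration, not Calico's ipam.go; the pre-used addresses are an assumption chosen so the output matches the grants in this log):

package main

import (
	"fmt"
	"net/netip"
)

// block models one per-host affine IPAM block handed out lowest-free-first.
type block struct {
	prefix netip.Prefix
	used   map[netip.Addr]bool
}

func (b *block) assign() (netip.Addr, bool) {
	for a := b.prefix.Addr(); b.prefix.Contains(a); a = a.Next() {
		if !b.used[a] {
			b.used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted; IPAM would claim a new block
}

func main() {
	b := &block{
		prefix: netip.MustParsePrefix("192.168.79.0/26"),
		used:   map[netip.Addr]bool{},
	}
	// .0 and .1 are presumably held by grants that predate this excerpt
	// (e.g. the endpoint behind caliaae0766ac59); with them marked used,
	// the model reproduces the grants logged here:
	// .2 (9tqhl), .3 (8kndr), .4 (5rfmb), .5 (cw5fx).
	b.used[netip.MustParseAddr("192.168.79.0")] = true
	b.used[netip.MustParseAddr("192.168.79.1")] = true
	for i := 0; i < 4; i++ {
		a, _ := b.assign()
		fmt.Println(a)
	}
}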
Apr 30 03:29:56.762369 containerd[1694]: 2025-04-30 03:29:56.736 [INFO][4839] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.3/26] IPv6=[] ContainerID="d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" HandleID="k8s-pod-network.d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:29:56.763757 containerd[1694]: 2025-04-30 03:29:56.738 [INFO][4814] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-8kndr" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"87eb0667-e570-47d8-8451-b9ca008ab0dd", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"", Pod:"coredns-6f6b679f8f-8kndr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali744bf05f77f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:56.763757 containerd[1694]: 2025-04-30 03:29:56.738 [INFO][4814] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.3/32] ContainerID="d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-8kndr" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:29:56.763757 containerd[1694]: 2025-04-30 03:29:56.738 [INFO][4814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali744bf05f77f ContainerID="d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-8kndr" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:29:56.763757 containerd[1694]: 2025-04-30 03:29:56.740 [INFO][4814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-8kndr" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:29:56.763757 containerd[1694]: 2025-04-30 03:29:56.740 [INFO][4814] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-8kndr" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"87eb0667-e570-47d8-8451-b9ca008ab0dd", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b", Pod:"coredns-6f6b679f8f-8kndr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali744bf05f77f", MAC:"3a:80:91:2f:b4:16", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:56.763757 containerd[1694]: 2025-04-30 03:29:56.759 [INFO][4814] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b" Namespace="kube-system" Pod="coredns-6f6b679f8f-8kndr" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:29:56.797293 containerd[1694]: time="2025-04-30T03:29:56.797130483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:56.797448 containerd[1694]: time="2025-04-30T03:29:56.797317785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:56.797448 containerd[1694]: time="2025-04-30T03:29:56.797362986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:56.797748 containerd[1694]: time="2025-04-30T03:29:56.797681790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:56.833404 systemd[1]: Started cri-containerd-d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b.scope - libcontainer container d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b. Apr 30 03:29:56.859211 systemd-networkd[1331]: calie7e222c13a1: Link UP Apr 30 03:29:56.863548 systemd-networkd[1331]: calie7e222c13a1: Gained carrier
Apr 30 03:29:56.900800 containerd[1694]: 2025-04-30 03:29:56.736 [INFO][4844] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-afe39379c7' Apr 30 03:29:56.900800 containerd[1694]: 2025-04-30 03:29:56.805 [INFO][4844] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.900800 containerd[1694]: 2025-04-30 03:29:56.811 [INFO][4844] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.900800 containerd[1694]: 2025-04-30 03:29:56.819 [INFO][4844] ipam/ipam.go 489: Trying affinity for 192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.900800 containerd[1694]: 2025-04-30 03:29:56.821 [INFO][4844] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.900800 containerd[1694]: 2025-04-30 03:29:56.824 [INFO][4844] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.900800 containerd[1694]: 2025-04-30 03:29:56.824 [INFO][4844] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.900800 containerd[1694]: 2025-04-30 03:29:56.826 [INFO][4844] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03 Apr 30 03:29:56.900800 containerd[1694]: 2025-04-30 03:29:56.832 [INFO][4844] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.0/26 handle="k8s-pod-network.09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.900800 containerd[1694]: 2025-04-30 03:29:56.844 [INFO][4844] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.4/26] block=192.168.79.0/26 handle="k8s-pod-network.09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.900800 containerd[1694]: 2025-04-30 03:29:56.844 [INFO][4844] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.4/26] handle="k8s-pod-network.09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:56.900800 containerd[1694]: 2025-04-30 03:29:56.844 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:56.900800 containerd[1694]: 2025-04-30 03:29:56.844 [INFO][4844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.4/26] IPv6=[] ContainerID="09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03" HandleID="k8s-pod-network.09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" Apr 30 03:29:56.901790 containerd[1694]: 2025-04-30 03:29:56.846 [INFO][4822] cni-plugin/k8s.go 386: Populated endpoint ContainerID="09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c5c4897-5rfmb" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0", GenerateName:"calico-apiserver-7f7c5c4897-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c5c4897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"", Pod:"calico-apiserver-7f7c5c4897-5rfmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7e222c13a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:56.901790 containerd[1694]: 2025-04-30 03:29:56.846 [INFO][4822] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.4/32] ContainerID="09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c5c4897-5rfmb" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" Apr 30 03:29:56.901790 containerd[1694]: 2025-04-30 03:29:56.846 [INFO][4822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7e222c13a1 ContainerID="09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c5c4897-5rfmb" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" Apr 30 03:29:56.901790 containerd[1694]: 2025-04-30 03:29:56.868 [INFO][4822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c5c4897-5rfmb" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" Apr 30 03:29:56.901790 containerd[1694]: 2025-04-30 03:29:56.869 [INFO][4822] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c5c4897-5rfmb" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0", GenerateName:"calico-apiserver-7f7c5c4897-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c5c4897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03", Pod:"calico-apiserver-7f7c5c4897-5rfmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7e222c13a1", MAC:"06:13:1b:57:20:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:56.901790 containerd[1694]: 2025-04-30 03:29:56.897 [INFO][4822] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c5c4897-5rfmb" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" Apr 30 03:29:56.945792 containerd[1694]: time="2025-04-30T03:29:56.944490096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8kndr,Uid:87eb0667-e570-47d8-8451-b9ca008ab0dd,Namespace:kube-system,Attempt:1,} returns sandbox id \"d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b\"" Apr 30 03:29:56.958421 containerd[1694]: time="2025-04-30T03:29:56.958018181Z" level=info msg="CreateContainer within sandbox \"d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:29:56.968735 containerd[1694]: time="2025-04-30T03:29:56.968550225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:56.968735 containerd[1694]: time="2025-04-30T03:29:56.968596126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:56.968735 containerd[1694]: time="2025-04-30T03:29:56.968609426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:56.968735 containerd[1694]: time="2025-04-30T03:29:56.968672927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:56.995390 systemd[1]: Started cri-containerd-09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03.scope - libcontainer container 09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03. Apr 30 03:29:56.996510 containerd[1694]: time="2025-04-30T03:29:56.996401306Z" level=info msg="CreateContainer within sandbox \"d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d7ca243532b33d2f56fab49b1ae2827e3cf0589ecb81f12104fb7515a19bd2c\"" Apr 30 03:29:56.999099 containerd[1694]: time="2025-04-30T03:29:56.999068542Z" level=info msg="StartContainer for \"3d7ca243532b33d2f56fab49b1ae2827e3cf0589ecb81f12104fb7515a19bd2c\"" Apr 30 03:29:57.046484 systemd[1]: Started cri-containerd-3d7ca243532b33d2f56fab49b1ae2827e3cf0589ecb81f12104fb7515a19bd2c.scope - libcontainer container 3d7ca243532b33d2f56fab49b1ae2827e3cf0589ecb81f12104fb7515a19bd2c. Apr 30 03:29:57.097028 containerd[1694]: time="2025-04-30T03:29:57.096438273Z" level=info msg="StartContainer for \"3d7ca243532b33d2f56fab49b1ae2827e3cf0589ecb81f12104fb7515a19bd2c\" returns successfully" Apr 30 03:29:57.097028 containerd[1694]: time="2025-04-30T03:29:57.096574374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c5c4897-5rfmb,Uid:eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03\"" Apr 30 03:29:57.361330 containerd[1694]: time="2025-04-30T03:29:57.360958287Z" level=info msg="StopPodSandbox for \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\"" Apr 30 03:29:57.441197 containerd[1694]: 2025-04-30 03:29:57.409 [INFO][5012] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Apr 30 03:29:57.441197 containerd[1694]: 2025-04-30 03:29:57.409 [INFO][5012] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" iface="eth0" netns="/var/run/netns/cni-a5485f83-57e2-908d-7b2b-1efd4b820b07" Apr 30 03:29:57.441197 containerd[1694]: 2025-04-30 03:29:57.410 [INFO][5012] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" iface="eth0" netns="/var/run/netns/cni-a5485f83-57e2-908d-7b2b-1efd4b820b07" Apr 30 03:29:57.441197 containerd[1694]: 2025-04-30 03:29:57.410 [INFO][5012] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" iface="eth0" netns="/var/run/netns/cni-a5485f83-57e2-908d-7b2b-1efd4b820b07" Apr 30 03:29:57.441197 containerd[1694]: 2025-04-30 03:29:57.410 [INFO][5012] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Apr 30 03:29:57.441197 containerd[1694]: 2025-04-30 03:29:57.410 [INFO][5012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Apr 30 03:29:57.441197 containerd[1694]: 2025-04-30 03:29:57.430 [INFO][5019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" HandleID="k8s-pod-network.0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:29:57.441197 containerd[1694]: 2025-04-30 03:29:57.430 [INFO][5019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:57.441197 containerd[1694]: 2025-04-30 03:29:57.430 [INFO][5019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:57.441197 containerd[1694]: 2025-04-30 03:29:57.437 [WARNING][5019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" HandleID="k8s-pod-network.0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:29:57.441197 containerd[1694]: 2025-04-30 03:29:57.437 [INFO][5019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" HandleID="k8s-pod-network.0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:29:57.441197 containerd[1694]: 2025-04-30 03:29:57.439 [INFO][5019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:57.441197 containerd[1694]: 2025-04-30 03:29:57.440 [INFO][5012] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Apr 30 03:29:57.441763 containerd[1694]: time="2025-04-30T03:29:57.441286385Z" level=info msg="TearDown network for sandbox \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\" successfully" Apr 30 03:29:57.441763 containerd[1694]: time="2025-04-30T03:29:57.441319585Z" level=info msg="StopPodSandbox for \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\" returns successfully" Apr 30 03:29:57.442131 containerd[1694]: time="2025-04-30T03:29:57.442077796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f68864dbc-cw5fx,Uid:0becbef8-61a7-43c8-8023-05b47f7451f0,Namespace:calico-system,Attempt:1,}" Apr 30 03:29:57.484511 systemd[1]: run-netns-cni\x2da5485f83\x2d57e2\x2d908d\x2d7b2b\x2d1efd4b820b07.mount: Deactivated successfully.
Apr 30 03:29:57.612659 kubelet[3113]: I0430 03:29:57.612321 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8kndr" podStartSLOduration=36.612297221 podStartE2EDuration="36.612297221s" podCreationTimestamp="2025-04-30 03:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:57.611659013 +0000 UTC m=+43.356927268" watchObservedRunningTime="2025-04-30 03:29:57.612297221 +0000 UTC m=+43.357565476" Apr 30 03:29:57.897705 systemd-networkd[1331]: cali83f2ec694bf: Link UP Apr 30 03:29:57.897966 systemd-networkd[1331]: cali83f2ec694bf: Gained carrier Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.678 [INFO][5029] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0 calico-kube-controllers-5f68864dbc- calico-system 0becbef8-61a7-43c8-8023-05b47f7451f0 811 0 2025-04-30 03:29:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f68864dbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-a-afe39379c7 calico-kube-controllers-5f68864dbc-cw5fx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali83f2ec694bf [] []}} ContainerID="e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" Namespace="calico-system" Pod="calico-kube-controllers-5f68864dbc-cw5fx" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-" Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.684 [INFO][5029] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" Namespace="calico-system" Pod="calico-kube-controllers-5f68864dbc-cw5fx" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.754 [INFO][5044] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" HandleID="k8s-pod-network.e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.771 [INFO][5044] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" HandleID="k8s-pod-network.e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a150), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-afe39379c7", "pod":"calico-kube-controllers-5f68864dbc-cw5fx", "timestamp":"2025-04-30 03:29:57.754837969 +0000 UTC"}, Hostname:"ci-4081.3.3-a-afe39379c7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.771 [INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.771 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.771 [INFO][5044] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-afe39379c7' Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.777 [INFO][5044] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.791 [INFO][5044] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.811 [INFO][5044] ipam/ipam.go 489: Trying affinity for 192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.816 [INFO][5044] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.831 [INFO][5044] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.831 [INFO][5044] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.842 [INFO][5044] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.856 [INFO][5044] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.0/26 handle="k8s-pod-network.e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.881 [INFO][5044] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.5/26] block=192.168.79.0/26 handle="k8s-pod-network.e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.881 [INFO][5044] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.5/26] handle="k8s-pod-network.e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.881 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
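The kubelet latency record above reports podStartSLOduration=36.612297221 with both pull timestamps zeroed; under that condition the figure appears to be simply watchObservedRunningTime minus podCreationTimestamp (an inference from the values in this log, not a statement about kubelet internals). A quick Go check of the arithmetic, using the logged wall-clock values without their monotonic (m=+…) parts:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-04-30 03:29:21 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-04-30 03:29:57.612297221 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 36.612297221s, matching podStartE2EDuration
}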
Apr 30 03:29:57.936483 containerd[1694]: 2025-04-30 03:29:57.881 [INFO][5044] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.5/26] IPv6=[] ContainerID="e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" HandleID="k8s-pod-network.e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:29:57.937700 containerd[1694]: 2025-04-30 03:29:57.887 [INFO][5029] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" Namespace="calico-system" Pod="calico-kube-controllers-5f68864dbc-cw5fx" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0", GenerateName:"calico-kube-controllers-5f68864dbc-", Namespace:"calico-system", SelfLink:"", UID:"0becbef8-61a7-43c8-8023-05b47f7451f0", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f68864dbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"", Pod:"calico-kube-controllers-5f68864dbc-cw5fx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali83f2ec694bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:57.937700 containerd[1694]: 2025-04-30 03:29:57.888 [INFO][5029] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.5/32] ContainerID="e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" Namespace="calico-system" Pod="calico-kube-controllers-5f68864dbc-cw5fx" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:29:57.937700 containerd[1694]: 2025-04-30 03:29:57.888 [INFO][5029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83f2ec694bf ContainerID="e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" Namespace="calico-system" Pod="calico-kube-controllers-5f68864dbc-cw5fx" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:29:57.937700 containerd[1694]: 2025-04-30 03:29:57.897 [INFO][5029] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" Namespace="calico-system" Pod="calico-kube-controllers-5f68864dbc-cw5fx" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:29:57.937700 containerd[1694]: 2025-04-30 03:29:57.898 [INFO][5029] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" Namespace="calico-system" Pod="calico-kube-controllers-5f68864dbc-cw5fx" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0", GenerateName:"calico-kube-controllers-5f68864dbc-", Namespace:"calico-system", SelfLink:"", UID:"0becbef8-61a7-43c8-8023-05b47f7451f0", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f68864dbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e", Pod:"calico-kube-controllers-5f68864dbc-cw5fx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali83f2ec694bf", MAC:"da:e7:7a:2c:83:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:57.937700 containerd[1694]: 2025-04-30 03:29:57.931 [INFO][5029] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e" Namespace="calico-system" Pod="calico-kube-controllers-5f68864dbc-cw5fx" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:29:57.995596 containerd[1694]: time="2025-04-30T03:29:57.995277341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:57.995596 containerd[1694]: time="2025-04-30T03:29:57.995329942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:57.995596 containerd[1694]: time="2025-04-30T03:29:57.995358943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:57.995596 containerd[1694]: time="2025-04-30T03:29:57.995471044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:58.054885 systemd[1]: Started cri-containerd-e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e.scope - libcontainer container e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e.
Apr 30 03:29:58.229358 containerd[1694]: time="2025-04-30T03:29:58.229313600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f68864dbc-cw5fx,Uid:0becbef8-61a7-43c8-8023-05b47f7451f0,Namespace:calico-system,Attempt:1,} returns sandbox id \"e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e\"" Apr 30 03:29:58.334403 systemd-networkd[1331]: calie7e222c13a1: Gained IPv6LL Apr 30 03:29:58.336041 systemd-networkd[1331]: cali744bf05f77f: Gained IPv6LL Apr 30 03:29:58.365446 containerd[1694]: time="2025-04-30T03:29:58.365342252Z" level=info msg="StopPodSandbox for \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\"" Apr 30 03:29:58.565251 containerd[1694]: 2025-04-30 03:29:58.491 [INFO][5129] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Apr 30 03:29:58.565251 containerd[1694]: 2025-04-30 03:29:58.494 [INFO][5129] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" iface="eth0" netns="/var/run/netns/cni-91434d97-778f-e755-c9ca-19166af4a3bb" Apr 30 03:29:58.565251 containerd[1694]: 2025-04-30 03:29:58.494 [INFO][5129] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" iface="eth0" netns="/var/run/netns/cni-91434d97-778f-e755-c9ca-19166af4a3bb" Apr 30 03:29:58.565251 containerd[1694]: 2025-04-30 03:29:58.495 [INFO][5129] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" iface="eth0" netns="/var/run/netns/cni-91434d97-778f-e755-c9ca-19166af4a3bb" Apr 30 03:29:58.565251 containerd[1694]: 2025-04-30 03:29:58.495 [INFO][5129] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Apr 30 03:29:58.565251 containerd[1694]: 2025-04-30 03:29:58.495 [INFO][5129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Apr 30 03:29:58.565251 containerd[1694]: 2025-04-30 03:29:58.547 [INFO][5136] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" HandleID="k8s-pod-network.af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:29:58.565251 containerd[1694]: 2025-04-30 03:29:58.547 [INFO][5136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:58.565251 containerd[1694]: 2025-04-30 03:29:58.548 [INFO][5136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:58.565251 containerd[1694]: 2025-04-30 03:29:58.556 [WARNING][5136] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" HandleID="k8s-pod-network.af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:29:58.565251 containerd[1694]: 2025-04-30 03:29:58.556 [INFO][5136] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" HandleID="k8s-pod-network.af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:29:58.565251 containerd[1694]: 2025-04-30 03:29:58.557 [INFO][5136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:58.565251 containerd[1694]: 2025-04-30 03:29:58.561 [INFO][5129] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Apr 30 03:29:58.572022 containerd[1694]: time="2025-04-30T03:29:58.568909573Z" level=info msg="TearDown network for sandbox \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\" successfully" Apr 30 03:29:58.572022 containerd[1694]: time="2025-04-30T03:29:58.568948873Z" level=info msg="StopPodSandbox for \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\" returns successfully" Apr 30 03:29:58.571966 systemd[1]: run-netns-cni\x2d91434d97\x2d778f\x2de755\x2dc9ca\x2d19166af4a3bb.mount: Deactivated successfully. Apr 30 03:29:58.572773 containerd[1694]: time="2025-04-30T03:29:58.572715127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bkvhq,Uid:6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a,Namespace:kube-system,Attempt:1,}" Apr 30 03:29:58.795087 systemd-networkd[1331]: cali618cf6a0ae6: Link UP Apr 30 03:29:58.795744 systemd-networkd[1331]: cali618cf6a0ae6: Gained carrier Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.676 [INFO][5142] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0 coredns-6f6b679f8f- kube-system 6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a 827 0 2025-04-30 03:29:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-afe39379c7 coredns-6f6b679f8f-bkvhq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali618cf6a0ae6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" Namespace="kube-system" Pod="coredns-6f6b679f8f-bkvhq" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-" Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.677 [INFO][5142] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" Namespace="kube-system" Pod="coredns-6f6b679f8f-bkvhq" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.726 [INFO][5155] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" HandleID="k8s-pod-network.c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" 
Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.740 [INFO][5155] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" HandleID="k8s-pod-network.c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ce30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-afe39379c7", "pod":"coredns-6f6b679f8f-bkvhq", "timestamp":"2025-04-30 03:29:58.72622023 +0000 UTC"}, Hostname:"ci-4081.3.3-a-afe39379c7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.740 [INFO][5155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.740 [INFO][5155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.740 [INFO][5155] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-afe39379c7' Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.742 [INFO][5155] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.747 [INFO][5155] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.754 [INFO][5155] ipam/ipam.go 489: Trying affinity for 192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.756 [INFO][5155] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.759 [INFO][5155] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.759 [INFO][5155] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.761 [INFO][5155] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.771 [INFO][5155] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.0/26 handle="k8s-pod-network.c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.785 [INFO][5155] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.6/26] block=192.168.79.0/26 handle="k8s-pod-network.c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.785 [INFO][5155] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.6/26] 
handle="k8s-pod-network.c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" host="ci-4081.3.3-a-afe39379c7" Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.786 [INFO][5155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:58.816827 containerd[1694]: 2025-04-30 03:29:58.786 [INFO][5155] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.6/26] IPv6=[] ContainerID="c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" HandleID="k8s-pod-network.c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:29:58.819711 containerd[1694]: 2025-04-30 03:29:58.789 [INFO][5142] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" Namespace="kube-system" Pod="coredns-6f6b679f8f-bkvhq" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"", Pod:"coredns-6f6b679f8f-bkvhq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali618cf6a0ae6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:58.819711 containerd[1694]: 2025-04-30 03:29:58.789 [INFO][5142] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.6/32] ContainerID="c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" Namespace="kube-system" Pod="coredns-6f6b679f8f-bkvhq" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:29:58.819711 containerd[1694]: 2025-04-30 03:29:58.790 [INFO][5142] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali618cf6a0ae6 ContainerID="c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" Namespace="kube-system" Pod="coredns-6f6b679f8f-bkvhq" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:29:58.819711 containerd[1694]: 2025-04-30 
03:29:58.793 [INFO][5142] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" Namespace="kube-system" Pod="coredns-6f6b679f8f-bkvhq" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:29:58.819711 containerd[1694]: 2025-04-30 03:29:58.794 [INFO][5142] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" Namespace="kube-system" Pod="coredns-6f6b679f8f-bkvhq" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba", Pod:"coredns-6f6b679f8f-bkvhq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali618cf6a0ae6", MAC:"6a:c8:5d:23:85:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:58.819711 containerd[1694]: 2025-04-30 03:29:58.811 [INFO][5142] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba" Namespace="kube-system" Pod="coredns-6f6b679f8f-bkvhq" WorkloadEndpoint="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:29:58.857006 containerd[1694]: time="2025-04-30T03:29:58.856697302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:58.857006 containerd[1694]: time="2025-04-30T03:29:58.856785104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:58.857006 containerd[1694]: time="2025-04-30T03:29:58.856800104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:58.857006 containerd[1694]: time="2025-04-30T03:29:58.856897205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:58.893462 systemd[1]: Started cri-containerd-c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba.scope - libcontainer container c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba. Apr 30 03:29:58.973552 containerd[1694]: time="2025-04-30T03:29:58.973509179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bkvhq,Uid:6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a,Namespace:kube-system,Attempt:1,} returns sandbox id \"c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba\"" Apr 30 03:29:58.980606 containerd[1694]: time="2025-04-30T03:29:58.980347577Z" level=info msg="CreateContainer within sandbox \"c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:29:59.025914 containerd[1694]: time="2025-04-30T03:29:59.025726728Z" level=info msg="CreateContainer within sandbox \"c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e648909afd2026d6b16e5a098428da98310c25c97d72cf7c75db541333318208\"" Apr 30 03:29:59.028017 containerd[1694]: time="2025-04-30T03:29:59.027983760Z" level=info msg="StartContainer for \"e648909afd2026d6b16e5a098428da98310c25c97d72cf7c75db541333318208\"" Apr 30 03:29:59.069511 systemd[1]: Started cri-containerd-e648909afd2026d6b16e5a098428da98310c25c97d72cf7c75db541333318208.scope - libcontainer container e648909afd2026d6b16e5a098428da98310c25c97d72cf7c75db541333318208. 
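
The ipam.go records above walk Calico's block-affinity allocation for coredns-6f6b679f8f-bkvhq: the host's affine block 192.168.79.0/26 is looked up and loaded, the next free address is claimed (192.168.79.6, immediately after the .5 handed to the kube-controllers pod), and the block is written back under the host-wide IPAM lock. A minimal sketch of just the ordinal-assignment step, assuming a plain bitmap per /26 block; the real allocator also tracks handles and attributes and retries datastore CAS conflicts:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // block models an affine IPAM block such as 192.168.79.0/26 (sketch is /26-only).
    type block struct {
    	cidr netip.Prefix
    	used [64]bool // one slot per ordinal in a /26
    }

    // assign claims the lowest free ordinal, mirroring the
    // "Attempting to assign 1 addresses from block" step in the records above.
    func (b *block) assign() (netip.Addr, bool) {
    	base := b.cidr.Addr().As4()
    	for ord := 0; ord < 64; ord++ {
    		if b.used[ord] {
    			continue
    		}
    		b.used[ord] = true
    		a := base
    		a[3] += byte(ord) // ordinals in a /26 stay within the last octet
    		return netip.AddrFrom4(a), true
    	}
    	return netip.Addr{}, false
    }

    func main() {
    	b := &block{cidr: netip.MustParsePrefix("192.168.79.0/26")}
    	for ord := 0; ord < 6; ord++ { // ordinals 0-5 are already taken on this host
    		b.used[ord] = true
    	}
    	ip, _ := b.assign()
    	fmt.Println(ip) // 192.168.79.6, matching the address claimed for coredns above
    }
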
Apr 30 03:29:59.123571 containerd[1694]: time="2025-04-30T03:29:59.123516331Z" level=info msg="StartContainer for \"e648909afd2026d6b16e5a098428da98310c25c97d72cf7c75db541333318208\" returns successfully" Apr 30 03:29:59.308024 containerd[1694]: time="2025-04-30T03:29:59.307975078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:59.310140 containerd[1694]: time="2025-04-30T03:29:59.310071008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:29:59.316361 containerd[1694]: time="2025-04-30T03:29:59.313975864Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:59.322399 containerd[1694]: time="2025-04-30T03:29:59.322256983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:59.323499 containerd[1694]: time="2025-04-30T03:29:59.323425900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 4.039395893s" Apr 30 03:29:59.323499 containerd[1694]: time="2025-04-30T03:29:59.323478401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:29:59.326118 containerd[1694]: time="2025-04-30T03:29:59.326087638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:29:59.327258 containerd[1694]: time="2025-04-30T03:29:59.327225054Z" level=info msg="CreateContainer within sandbox \"051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:29:59.359636 containerd[1694]: time="2025-04-30T03:29:59.359591819Z" level=info msg="CreateContainer within sandbox \"051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"17fad85cd7dc9e50d39bbc8a7e000173fdb88bf80604477bbd827cb67a1204d4\"" Apr 30 03:29:59.360283 containerd[1694]: time="2025-04-30T03:29:59.360214328Z" level=info msg="StartContainer for \"17fad85cd7dc9e50d39bbc8a7e000173fdb88bf80604477bbd827cb67a1204d4\"" Apr 30 03:29:59.387364 systemd[1]: Started cri-containerd-17fad85cd7dc9e50d39bbc8a7e000173fdb88bf80604477bbd827cb67a1204d4.scope - libcontainer container 17fad85cd7dc9e50d39bbc8a7e000173fdb88bf80604477bbd827cb67a1204d4. 
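
The three ImageCreate events just above (repo tag, config image ID, repo digest) describe a single pulled image; containerd also reports what its transfer counter saw (bytes read=43021437, roughly the compressed layers) against a content size of 44514075, plus the wall-clock pull time. Turning such a record into throughput is a one-liner's worth of arithmetic, with the numbers copied from the record above:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const bytesRead = 43021437.0                 // "bytes read" from the pull record above
    	d, err := time.ParseDuration("4.039395893s") // "in 4.039395893s"
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%.1f MiB/s\n", bytesRead/d.Seconds()/(1<<20)) // ≈ 10.2 MiB/s
    }

The second pull of the same apiserver tag a couple of seconds later completes in 580.875836ms with only 77 bytes read (further below), presumably because the content is already local and little more than the manifest is re-resolved.
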
Apr 30 03:29:59.432430 containerd[1694]: time="2025-04-30T03:29:59.432376963Z" level=info msg="StartContainer for \"17fad85cd7dc9e50d39bbc8a7e000173fdb88bf80604477bbd827cb67a1204d4\" returns successfully" Apr 30 03:29:59.628144 kubelet[3113]: I0430 03:29:59.627637 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f7c5c4897-9tqhl" podStartSLOduration=28.04215771 podStartE2EDuration="32.627606665s" podCreationTimestamp="2025-04-30 03:29:27 +0000 UTC" firstStartedPulling="2025-04-30 03:29:54.739585368 +0000 UTC m=+40.484853723" lastFinishedPulling="2025-04-30 03:29:59.325034423 +0000 UTC m=+45.070302678" observedRunningTime="2025-04-30 03:29:59.62661915 +0000 UTC m=+45.371887505" watchObservedRunningTime="2025-04-30 03:29:59.627606665 +0000 UTC m=+45.372875020" Apr 30 03:29:59.650392 kubelet[3113]: I0430 03:29:59.649937 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bkvhq" podStartSLOduration=38.649901385 podStartE2EDuration="38.649901385s" podCreationTimestamp="2025-04-30 03:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:59.64960998 +0000 UTC m=+45.394878235" watchObservedRunningTime="2025-04-30 03:29:59.649901385 +0000 UTC m=+45.395169740" Apr 30 03:29:59.935399 systemd-networkd[1331]: cali83f2ec694bf: Gained IPv6LL Apr 30 03:30:00.510739 systemd-networkd[1331]: cali618cf6a0ae6: Gained IPv6LL Apr 30 03:30:01.153427 containerd[1694]: time="2025-04-30T03:30:01.153372659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:01.155244 containerd[1694]: time="2025-04-30T03:30:01.155168685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:30:01.159251 containerd[1694]: time="2025-04-30T03:30:01.159176942Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:01.165073 containerd[1694]: time="2025-04-30T03:30:01.165016226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:01.166131 containerd[1694]: time="2025-04-30T03:30:01.165627235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.83839138s" Apr 30 03:30:01.166131 containerd[1694]: time="2025-04-30T03:30:01.165670135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:30:01.166720 containerd[1694]: time="2025-04-30T03:30:01.166693950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:30:01.168579 containerd[1694]: time="2025-04-30T03:30:01.168337474Z" level=info 
msg="CreateContainer within sandbox \"c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:30:01.203653 containerd[1694]: time="2025-04-30T03:30:01.203611780Z" level=info msg="CreateContainer within sandbox \"c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"97e2af270e7658680038f609fde2db683c582fde45d31288117a12ec3a3906ca\"" Apr 30 03:30:01.204454 containerd[1694]: time="2025-04-30T03:30:01.204408691Z" level=info msg="StartContainer for \"97e2af270e7658680038f609fde2db683c582fde45d31288117a12ec3a3906ca\"" Apr 30 03:30:01.243408 systemd[1]: run-containerd-runc-k8s.io-97e2af270e7658680038f609fde2db683c582fde45d31288117a12ec3a3906ca-runc.y6SQ5r.mount: Deactivated successfully. Apr 30 03:30:01.253360 systemd[1]: Started cri-containerd-97e2af270e7658680038f609fde2db683c582fde45d31288117a12ec3a3906ca.scope - libcontainer container 97e2af270e7658680038f609fde2db683c582fde45d31288117a12ec3a3906ca. Apr 30 03:30:01.300975 containerd[1694]: time="2025-04-30T03:30:01.300924076Z" level=info msg="StartContainer for \"97e2af270e7658680038f609fde2db683c582fde45d31288117a12ec3a3906ca\" returns successfully" Apr 30 03:30:01.446387 kubelet[3113]: I0430 03:30:01.446262 3113 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:30:01.446387 kubelet[3113]: I0430 03:30:01.446301 3113 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:30:01.741299 containerd[1694]: time="2025-04-30T03:30:01.741245295Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:01.745387 containerd[1694]: time="2025-04-30T03:30:01.745324053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:30:01.747642 containerd[1694]: time="2025-04-30T03:30:01.747608386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 580.875836ms" Apr 30 03:30:01.747642 containerd[1694]: time="2025-04-30T03:30:01.747642586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:30:01.749038 containerd[1694]: time="2025-04-30T03:30:01.748616000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:30:01.750973 containerd[1694]: time="2025-04-30T03:30:01.750942434Z" level=info msg="CreateContainer within sandbox \"09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:30:01.782783 containerd[1694]: time="2025-04-30T03:30:01.782740890Z" level=info msg="CreateContainer within sandbox \"09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"4d9802e4ba5e7d9af207dd54ae81e6d85368d973e036bdae5e08196b3eb94917\"" Apr 30 03:30:01.783533 containerd[1694]: time="2025-04-30T03:30:01.783356999Z" level=info msg="StartContainer for \"4d9802e4ba5e7d9af207dd54ae81e6d85368d973e036bdae5e08196b3eb94917\"" Apr 30 03:30:01.810366 systemd[1]: Started cri-containerd-4d9802e4ba5e7d9af207dd54ae81e6d85368d973e036bdae5e08196b3eb94917.scope - libcontainer container 4d9802e4ba5e7d9af207dd54ae81e6d85368d973e036bdae5e08196b3eb94917. Apr 30 03:30:01.858501 containerd[1694]: time="2025-04-30T03:30:01.858348375Z" level=info msg="StartContainer for \"4d9802e4ba5e7d9af207dd54ae81e6d85368d973e036bdae5e08196b3eb94917\" returns successfully" Apr 30 03:30:02.645446 kubelet[3113]: I0430 03:30:02.645378 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-f8qxp" podStartSLOduration=28.166813219 podStartE2EDuration="35.645352268s" podCreationTimestamp="2025-04-30 03:29:27 +0000 UTC" firstStartedPulling="2025-04-30 03:29:53.687950498 +0000 UTC m=+39.433218753" lastFinishedPulling="2025-04-30 03:30:01.166489547 +0000 UTC m=+46.911757802" observedRunningTime="2025-04-30 03:30:01.634358761 +0000 UTC m=+47.379627016" watchObservedRunningTime="2025-04-30 03:30:02.645352268 +0000 UTC m=+48.390620623" Apr 30 03:30:03.626505 kubelet[3113]: I0430 03:30:03.626468 3113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:04.444440 containerd[1694]: time="2025-04-30T03:30:04.444392284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:04.446239 containerd[1694]: time="2025-04-30T03:30:04.446162109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:30:04.449961 containerd[1694]: time="2025-04-30T03:30:04.449908063Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:04.453906 containerd[1694]: time="2025-04-30T03:30:04.453847519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:04.454852 containerd[1694]: time="2025-04-30T03:30:04.454441128Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.705790727s" Apr 30 03:30:04.454852 containerd[1694]: time="2025-04-30T03:30:04.454485629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:30:04.473771 containerd[1694]: time="2025-04-30T03:30:04.473630203Z" level=info msg="CreateContainer within sandbox \"e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:30:04.507701 containerd[1694]: time="2025-04-30T03:30:04.507646391Z" level=info msg="CreateContainer 
within sandbox \"e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"99411712d1b4aaa44cb0a4b9e2e6326c6e655c9bade85be7609b4ceada26baa5\"" Apr 30 03:30:04.508456 containerd[1694]: time="2025-04-30T03:30:04.508302001Z" level=info msg="StartContainer for \"99411712d1b4aaa44cb0a4b9e2e6326c6e655c9bade85be7609b4ceada26baa5\"" Apr 30 03:30:04.555366 systemd[1]: Started cri-containerd-99411712d1b4aaa44cb0a4b9e2e6326c6e655c9bade85be7609b4ceada26baa5.scope - libcontainer container 99411712d1b4aaa44cb0a4b9e2e6326c6e655c9bade85be7609b4ceada26baa5. Apr 30 03:30:04.608771 containerd[1694]: time="2025-04-30T03:30:04.608611140Z" level=info msg="StartContainer for \"99411712d1b4aaa44cb0a4b9e2e6326c6e655c9bade85be7609b4ceada26baa5\" returns successfully" Apr 30 03:30:04.651926 kubelet[3113]: I0430 03:30:04.651844 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f7c5c4897-5rfmb" podStartSLOduration=33.002996479 podStartE2EDuration="37.65182086s" podCreationTimestamp="2025-04-30 03:29:27 +0000 UTC" firstStartedPulling="2025-04-30 03:29:57.099654817 +0000 UTC m=+42.844923072" lastFinishedPulling="2025-04-30 03:30:01.748479098 +0000 UTC m=+47.493747453" observedRunningTime="2025-04-30 03:30:02.645990977 +0000 UTC m=+48.391259332" watchObservedRunningTime="2025-04-30 03:30:04.65182086 +0000 UTC m=+50.397089215" Apr 30 03:30:04.653026 kubelet[3113]: I0430 03:30:04.651967 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f68864dbc-cw5fx" podStartSLOduration=31.430354784 podStartE2EDuration="37.651960462s" podCreationTimestamp="2025-04-30 03:29:27 +0000 UTC" firstStartedPulling="2025-04-30 03:29:58.233671262 +0000 UTC m=+43.978939517" lastFinishedPulling="2025-04-30 03:30:04.45527694 +0000 UTC m=+50.200545195" observedRunningTime="2025-04-30 03:30:04.650098535 +0000 UTC m=+50.395366890" watchObservedRunningTime="2025-04-30 03:30:04.651960462 +0000 UTC m=+50.397228817" Apr 30 03:30:14.356520 containerd[1694]: time="2025-04-30T03:30:14.356473650Z" level=info msg="StopPodSandbox for \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\"" Apr 30 03:30:14.424637 containerd[1694]: 2025-04-30 03:30:14.393 [WARNING][5475] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0", GenerateName:"calico-apiserver-7f7c5c4897-", Namespace:"calico-apiserver", SelfLink:"", UID:"e953b696-38c7-4e44-a006-527e804faa59", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c5c4897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392", Pod:"calico-apiserver-7f7c5c4897-9tqhl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califae6339731d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:14.424637 containerd[1694]: 2025-04-30 03:30:14.394 [INFO][5475] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Apr 30 03:30:14.424637 containerd[1694]: 2025-04-30 03:30:14.394 [INFO][5475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" iface="eth0" netns="" Apr 30 03:30:14.424637 containerd[1694]: 2025-04-30 03:30:14.394 [INFO][5475] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Apr 30 03:30:14.424637 containerd[1694]: 2025-04-30 03:30:14.394 [INFO][5475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Apr 30 03:30:14.424637 containerd[1694]: 2025-04-30 03:30:14.413 [INFO][5484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" HandleID="k8s-pod-network.0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:30:14.424637 containerd[1694]: 2025-04-30 03:30:14.414 [INFO][5484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:14.424637 containerd[1694]: 2025-04-30 03:30:14.414 [INFO][5484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:14.424637 containerd[1694]: 2025-04-30 03:30:14.421 [WARNING][5484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" HandleID="k8s-pod-network.0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:30:14.424637 containerd[1694]: 2025-04-30 03:30:14.421 [INFO][5484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" HandleID="k8s-pod-network.0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:30:14.424637 containerd[1694]: 2025-04-30 03:30:14.422 [INFO][5484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:14.424637 containerd[1694]: 2025-04-30 03:30:14.423 [INFO][5475] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Apr 30 03:30:14.425121 containerd[1694]: time="2025-04-30T03:30:14.424648121Z" level=info msg="TearDown network for sandbox \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\" successfully" Apr 30 03:30:14.425121 containerd[1694]: time="2025-04-30T03:30:14.424680222Z" level=info msg="StopPodSandbox for \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\" returns successfully" Apr 30 03:30:14.425734 containerd[1694]: time="2025-04-30T03:30:14.425704336Z" level=info msg="RemovePodSandbox for \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\"" Apr 30 03:30:14.425860 containerd[1694]: time="2025-04-30T03:30:14.425742237Z" level=info msg="Forcibly stopping sandbox \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\"" Apr 30 03:30:14.496692 containerd[1694]: 2025-04-30 03:30:14.466 [WARNING][5502] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0", GenerateName:"calico-apiserver-7f7c5c4897-", Namespace:"calico-apiserver", SelfLink:"", UID:"e953b696-38c7-4e44-a006-527e804faa59", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c5c4897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"051559596943cea67f2b6a21241234ddd9caace1404f1beccb2d369cf4b52392", Pod:"calico-apiserver-7f7c5c4897-9tqhl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califae6339731d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:14.496692 containerd[1694]: 2025-04-30 03:30:14.466 [INFO][5502] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Apr 30 03:30:14.496692 containerd[1694]: 2025-04-30 03:30:14.466 [INFO][5502] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" iface="eth0" netns="" Apr 30 03:30:14.496692 containerd[1694]: 2025-04-30 03:30:14.466 [INFO][5502] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Apr 30 03:30:14.496692 containerd[1694]: 2025-04-30 03:30:14.466 [INFO][5502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Apr 30 03:30:14.496692 containerd[1694]: 2025-04-30 03:30:14.485 [INFO][5510] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" HandleID="k8s-pod-network.0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:30:14.496692 containerd[1694]: 2025-04-30 03:30:14.485 [INFO][5510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:14.496692 containerd[1694]: 2025-04-30 03:30:14.485 [INFO][5510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:14.496692 containerd[1694]: 2025-04-30 03:30:14.492 [WARNING][5510] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" HandleID="k8s-pod-network.0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:30:14.496692 containerd[1694]: 2025-04-30 03:30:14.492 [INFO][5510] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" HandleID="k8s-pod-network.0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--9tqhl-eth0" Apr 30 03:30:14.496692 containerd[1694]: 2025-04-30 03:30:14.493 [INFO][5510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:14.496692 containerd[1694]: 2025-04-30 03:30:14.494 [INFO][5502] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360" Apr 30 03:30:14.496692 containerd[1694]: time="2025-04-30T03:30:14.495435029Z" level=info msg="TearDown network for sandbox \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\" successfully" Apr 30 03:30:14.502791 containerd[1694]: time="2025-04-30T03:30:14.502742633Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:14.502942 containerd[1694]: time="2025-04-30T03:30:14.502826634Z" level=info msg="RemovePodSandbox \"0aef7e3abb31ea3da59e3aa99b2b7b6de7951b1626b848ca57e55550a172a360\" returns successfully" Apr 30 03:30:14.503506 containerd[1694]: time="2025-04-30T03:30:14.503477844Z" level=info msg="StopPodSandbox for \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\"" Apr 30 03:30:14.564082 containerd[1694]: 2025-04-30 03:30:14.536 [WARNING][5528] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ca18dcc-f415-45c0-be2d-91e3486ac03d", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045", Pod:"csi-node-driver-f8qxp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaae0766ac59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:14.564082 containerd[1694]: 2025-04-30 03:30:14.536 [INFO][5528] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Apr 30 03:30:14.564082 containerd[1694]: 2025-04-30 03:30:14.536 [INFO][5528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" iface="eth0" netns="" Apr 30 03:30:14.564082 containerd[1694]: 2025-04-30 03:30:14.536 [INFO][5528] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Apr 30 03:30:14.564082 containerd[1694]: 2025-04-30 03:30:14.536 [INFO][5528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Apr 30 03:30:14.564082 containerd[1694]: 2025-04-30 03:30:14.555 [INFO][5535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" HandleID="k8s-pod-network.70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Workload="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:30:14.564082 containerd[1694]: 2025-04-30 03:30:14.555 [INFO][5535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:14.564082 containerd[1694]: 2025-04-30 03:30:14.555 [INFO][5535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:14.564082 containerd[1694]: 2025-04-30 03:30:14.560 [WARNING][5535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" HandleID="k8s-pod-network.70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Workload="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:30:14.564082 containerd[1694]: 2025-04-30 03:30:14.560 [INFO][5535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" HandleID="k8s-pod-network.70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Workload="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:30:14.564082 containerd[1694]: 2025-04-30 03:30:14.562 [INFO][5535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:14.564082 containerd[1694]: 2025-04-30 03:30:14.563 [INFO][5528] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Apr 30 03:30:14.564801 containerd[1694]: time="2025-04-30T03:30:14.564084707Z" level=info msg="TearDown network for sandbox \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\" successfully" Apr 30 03:30:14.564801 containerd[1694]: time="2025-04-30T03:30:14.564115707Z" level=info msg="StopPodSandbox for \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\" returns successfully" Apr 30 03:30:14.564801 containerd[1694]: time="2025-04-30T03:30:14.564654215Z" level=info msg="RemovePodSandbox for \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\"" Apr 30 03:30:14.564801 containerd[1694]: time="2025-04-30T03:30:14.564685515Z" level=info msg="Forcibly stopping sandbox \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\"" Apr 30 03:30:14.636250 containerd[1694]: 2025-04-30 03:30:14.601 [WARNING][5554] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ca18dcc-f415-45c0-be2d-91e3486ac03d", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"c8c95c259cc5ec31966a6a5e60c7ca7ebcf65568372dcfec8d1fcd0a05fa9045", Pod:"csi-node-driver-f8qxp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaae0766ac59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:14.636250 containerd[1694]: 2025-04-30 03:30:14.601 [INFO][5554] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Apr 30 03:30:14.636250 containerd[1694]: 2025-04-30 03:30:14.601 [INFO][5554] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" iface="eth0" netns="" Apr 30 03:30:14.636250 containerd[1694]: 2025-04-30 03:30:14.601 [INFO][5554] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Apr 30 03:30:14.636250 containerd[1694]: 2025-04-30 03:30:14.601 [INFO][5554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Apr 30 03:30:14.636250 containerd[1694]: 2025-04-30 03:30:14.619 [INFO][5561] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" HandleID="k8s-pod-network.70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Workload="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:30:14.636250 containerd[1694]: 2025-04-30 03:30:14.619 [INFO][5561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:14.636250 containerd[1694]: 2025-04-30 03:30:14.619 [INFO][5561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:14.636250 containerd[1694]: 2025-04-30 03:30:14.627 [WARNING][5561] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" HandleID="k8s-pod-network.70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Workload="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:30:14.636250 containerd[1694]: 2025-04-30 03:30:14.627 [INFO][5561] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" HandleID="k8s-pod-network.70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Workload="ci--4081.3.3--a--afe39379c7-k8s-csi--node--driver--f8qxp-eth0" Apr 30 03:30:14.636250 containerd[1694]: 2025-04-30 03:30:14.629 [INFO][5561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:14.636250 containerd[1694]: 2025-04-30 03:30:14.632 [INFO][5554] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30" Apr 30 03:30:14.636250 containerd[1694]: time="2025-04-30T03:30:14.633589996Z" level=info msg="TearDown network for sandbox \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\" successfully" Apr 30 03:30:14.642910 containerd[1694]: time="2025-04-30T03:30:14.642867928Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:14.643356 containerd[1694]: time="2025-04-30T03:30:14.643277234Z" level=info msg="RemovePodSandbox \"70e80e6fad23e8310f8dbb37f8456175284e6ef5fc9fb0250f31e3bc3b3f6e30\" returns successfully" Apr 30 03:30:14.644027 containerd[1694]: time="2025-04-30T03:30:14.644004545Z" level=info msg="StopPodSandbox for \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\"" Apr 30 03:30:14.714119 containerd[1694]: 2025-04-30 03:30:14.684 [WARNING][5580] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0", GenerateName:"calico-kube-controllers-5f68864dbc-", Namespace:"calico-system", SelfLink:"", UID:"0becbef8-61a7-43c8-8023-05b47f7451f0", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f68864dbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e", Pod:"calico-kube-controllers-5f68864dbc-cw5fx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali83f2ec694bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:14.714119 containerd[1694]: 2025-04-30 03:30:14.684 [INFO][5580] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Apr 30 03:30:14.714119 containerd[1694]: 2025-04-30 03:30:14.684 [INFO][5580] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" iface="eth0" netns="" Apr 30 03:30:14.714119 containerd[1694]: 2025-04-30 03:30:14.684 [INFO][5580] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Apr 30 03:30:14.714119 containerd[1694]: 2025-04-30 03:30:14.684 [INFO][5580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Apr 30 03:30:14.714119 containerd[1694]: 2025-04-30 03:30:14.705 [INFO][5587] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" HandleID="k8s-pod-network.0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:30:14.714119 containerd[1694]: 2025-04-30 03:30:14.705 [INFO][5587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:14.714119 containerd[1694]: 2025-04-30 03:30:14.705 [INFO][5587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:14.714119 containerd[1694]: 2025-04-30 03:30:14.710 [WARNING][5587] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" HandleID="k8s-pod-network.0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:30:14.714119 containerd[1694]: 2025-04-30 03:30:14.710 [INFO][5587] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" HandleID="k8s-pod-network.0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:30:14.714119 containerd[1694]: 2025-04-30 03:30:14.711 [INFO][5587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:14.714119 containerd[1694]: 2025-04-30 03:30:14.712 [INFO][5580] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Apr 30 03:30:14.714119 containerd[1694]: time="2025-04-30T03:30:14.713990241Z" level=info msg="TearDown network for sandbox \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\" successfully" Apr 30 03:30:14.714119 containerd[1694]: time="2025-04-30T03:30:14.714018741Z" level=info msg="StopPodSandbox for \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\" returns successfully" Apr 30 03:30:14.714869 containerd[1694]: time="2025-04-30T03:30:14.714825753Z" level=info msg="RemovePodSandbox for \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\"" Apr 30 03:30:14.714869 containerd[1694]: time="2025-04-30T03:30:14.714867554Z" level=info msg="Forcibly stopping sandbox \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\"" Apr 30 03:30:14.781436 containerd[1694]: 2025-04-30 03:30:14.750 [WARNING][5605] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0", GenerateName:"calico-kube-controllers-5f68864dbc-", Namespace:"calico-system", SelfLink:"", UID:"0becbef8-61a7-43c8-8023-05b47f7451f0", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f68864dbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"e691198294ef103ff4ea7846dc327c393190496d5995ee00b614b695bfb3f99e", Pod:"calico-kube-controllers-5f68864dbc-cw5fx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali83f2ec694bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:14.781436 containerd[1694]: 2025-04-30 03:30:14.750 [INFO][5605] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Apr 30 03:30:14.781436 containerd[1694]: 2025-04-30 03:30:14.750 [INFO][5605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" iface="eth0" netns="" Apr 30 03:30:14.781436 containerd[1694]: 2025-04-30 03:30:14.750 [INFO][5605] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Apr 30 03:30:14.781436 containerd[1694]: 2025-04-30 03:30:14.750 [INFO][5605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Apr 30 03:30:14.781436 containerd[1694]: 2025-04-30 03:30:14.770 [INFO][5613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" HandleID="k8s-pod-network.0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:30:14.781436 containerd[1694]: 2025-04-30 03:30:14.770 [INFO][5613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:14.781436 containerd[1694]: 2025-04-30 03:30:14.770 [INFO][5613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:14.781436 containerd[1694]: 2025-04-30 03:30:14.777 [WARNING][5613] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" HandleID="k8s-pod-network.0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:30:14.781436 containerd[1694]: 2025-04-30 03:30:14.777 [INFO][5613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" HandleID="k8s-pod-network.0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--kube--controllers--5f68864dbc--cw5fx-eth0" Apr 30 03:30:14.781436 containerd[1694]: 2025-04-30 03:30:14.779 [INFO][5613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:14.781436 containerd[1694]: 2025-04-30 03:30:14.780 [INFO][5605] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc" Apr 30 03:30:14.782282 containerd[1694]: time="2025-04-30T03:30:14.781484902Z" level=info msg="TearDown network for sandbox \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\" successfully" Apr 30 03:30:14.788514 containerd[1694]: time="2025-04-30T03:30:14.788475602Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:14.788636 containerd[1694]: time="2025-04-30T03:30:14.788544703Z" level=info msg="RemovePodSandbox \"0199f3b238a2b17093031211fc8340fdc0f465886663a6a5b9159b3893b01fcc\" returns successfully" Apr 30 03:30:14.789122 containerd[1694]: time="2025-04-30T03:30:14.789086510Z" level=info msg="StopPodSandbox for \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\"" Apr 30 03:30:14.855682 containerd[1694]: 2025-04-30 03:30:14.822 [WARNING][5632] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0", GenerateName:"calico-apiserver-7f7c5c4897-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c5c4897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03", Pod:"calico-apiserver-7f7c5c4897-5rfmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7e222c13a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:14.855682 containerd[1694]: 2025-04-30 03:30:14.823 [INFO][5632] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Apr 30 03:30:14.855682 containerd[1694]: 2025-04-30 03:30:14.823 [INFO][5632] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" iface="eth0" netns="" Apr 30 03:30:14.855682 containerd[1694]: 2025-04-30 03:30:14.823 [INFO][5632] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Apr 30 03:30:14.855682 containerd[1694]: 2025-04-30 03:30:14.823 [INFO][5632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Apr 30 03:30:14.855682 containerd[1694]: 2025-04-30 03:30:14.846 [INFO][5639] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" HandleID="k8s-pod-network.8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" Apr 30 03:30:14.855682 containerd[1694]: 2025-04-30 03:30:14.846 [INFO][5639] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:14.855682 containerd[1694]: 2025-04-30 03:30:14.846 [INFO][5639] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:14.855682 containerd[1694]: 2025-04-30 03:30:14.852 [WARNING][5639] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" HandleID="k8s-pod-network.8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" Apr 30 03:30:14.855682 containerd[1694]: 2025-04-30 03:30:14.852 [INFO][5639] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" HandleID="k8s-pod-network.8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" Apr 30 03:30:14.855682 containerd[1694]: 2025-04-30 03:30:14.853 [INFO][5639] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:14.855682 containerd[1694]: 2025-04-30 03:30:14.854 [INFO][5632] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Apr 30 03:30:14.856254 containerd[1694]: time="2025-04-30T03:30:14.855722959Z" level=info msg="TearDown network for sandbox \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\" successfully" Apr 30 03:30:14.856254 containerd[1694]: time="2025-04-30T03:30:14.855754260Z" level=info msg="StopPodSandbox for \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\" returns successfully" Apr 30 03:30:14.857182 containerd[1694]: time="2025-04-30T03:30:14.856749574Z" level=info msg="RemovePodSandbox for \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\"" Apr 30 03:30:14.857182 containerd[1694]: time="2025-04-30T03:30:14.856787074Z" level=info msg="Forcibly stopping sandbox \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\"" Apr 30 03:30:14.923206 containerd[1694]: 2025-04-30 03:30:14.894 [WARNING][5657] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0", GenerateName:"calico-apiserver-7f7c5c4897-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb6ecfc8-c4f5-4d10-b5a5-3ef2fe85ec1a", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c5c4897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"09cb5d416c9b6dfc125e92dc46d8ba0d5c5588824782271daec01706e970bc03", Pod:"calico-apiserver-7f7c5c4897-5rfmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7e222c13a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:14.923206 containerd[1694]: 2025-04-30 03:30:14.894 [INFO][5657] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Apr 30 03:30:14.923206 containerd[1694]: 2025-04-30 03:30:14.894 [INFO][5657] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" iface="eth0" netns="" Apr 30 03:30:14.923206 containerd[1694]: 2025-04-30 03:30:14.895 [INFO][5657] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Apr 30 03:30:14.923206 containerd[1694]: 2025-04-30 03:30:14.895 [INFO][5657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Apr 30 03:30:14.923206 containerd[1694]: 2025-04-30 03:30:14.913 [INFO][5664] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" HandleID="k8s-pod-network.8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" Apr 30 03:30:14.923206 containerd[1694]: 2025-04-30 03:30:14.914 [INFO][5664] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:14.923206 containerd[1694]: 2025-04-30 03:30:14.914 [INFO][5664] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:14.923206 containerd[1694]: 2025-04-30 03:30:14.919 [WARNING][5664] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" HandleID="k8s-pod-network.8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" Apr 30 03:30:14.923206 containerd[1694]: 2025-04-30 03:30:14.920 [INFO][5664] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" HandleID="k8s-pod-network.8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Workload="ci--4081.3.3--a--afe39379c7-k8s-calico--apiserver--7f7c5c4897--5rfmb-eth0" Apr 30 03:30:14.923206 containerd[1694]: 2025-04-30 03:30:14.921 [INFO][5664] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:14.923206 containerd[1694]: 2025-04-30 03:30:14.922 [INFO][5657] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389" Apr 30 03:30:14.924023 containerd[1694]: time="2025-04-30T03:30:14.923153619Z" level=info msg="TearDown network for sandbox \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\" successfully" Apr 30 03:30:14.931111 containerd[1694]: time="2025-04-30T03:30:14.931067932Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:14.931232 containerd[1694]: time="2025-04-30T03:30:14.931135733Z" level=info msg="RemovePodSandbox \"8d941ad30fab4369d676277bd485523ce03b2068e6686f231968897e7d9fc389\" returns successfully" Apr 30 03:30:14.931982 containerd[1694]: time="2025-04-30T03:30:14.931714141Z" level=info msg="StopPodSandbox for \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\"" Apr 30 03:30:14.993311 containerd[1694]: 2025-04-30 03:30:14.964 [WARNING][5682] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"87eb0667-e570-47d8-8451-b9ca008ab0dd", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b", Pod:"coredns-6f6b679f8f-8kndr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali744bf05f77f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:14.993311 containerd[1694]: 2025-04-30 03:30:14.964 [INFO][5682] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Apr 30 03:30:14.993311 containerd[1694]: 2025-04-30 03:30:14.964 [INFO][5682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" iface="eth0" netns="" Apr 30 03:30:14.993311 containerd[1694]: 2025-04-30 03:30:14.964 [INFO][5682] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Apr 30 03:30:14.993311 containerd[1694]: 2025-04-30 03:30:14.964 [INFO][5682] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Apr 30 03:30:14.993311 containerd[1694]: 2025-04-30 03:30:14.984 [INFO][5689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" HandleID="k8s-pod-network.993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:30:14.993311 containerd[1694]: 2025-04-30 03:30:14.984 [INFO][5689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:14.993311 containerd[1694]: 2025-04-30 03:30:14.984 [INFO][5689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:14.993311 containerd[1694]: 2025-04-30 03:30:14.989 [WARNING][5689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" HandleID="k8s-pod-network.993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:30:14.993311 containerd[1694]: 2025-04-30 03:30:14.989 [INFO][5689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" HandleID="k8s-pod-network.993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:30:14.993311 containerd[1694]: 2025-04-30 03:30:14.991 [INFO][5689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:14.993311 containerd[1694]: 2025-04-30 03:30:14.992 [INFO][5682] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Apr 30 03:30:14.994160 containerd[1694]: time="2025-04-30T03:30:14.993348519Z" level=info msg="TearDown network for sandbox \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\" successfully" Apr 30 03:30:14.994160 containerd[1694]: time="2025-04-30T03:30:14.993377819Z" level=info msg="StopPodSandbox for \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\" returns successfully" Apr 30 03:30:14.994160 containerd[1694]: time="2025-04-30T03:30:14.994032729Z" level=info msg="RemovePodSandbox for \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\"" Apr 30 03:30:14.994160 containerd[1694]: time="2025-04-30T03:30:14.994065329Z" level=info msg="Forcibly stopping sandbox \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\"" Apr 30 03:30:15.058588 containerd[1694]: 2025-04-30 03:30:15.028 [WARNING][5707] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"87eb0667-e570-47d8-8451-b9ca008ab0dd", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"d233c51f26596f8b7a042af77d139338ffc7b3bb93608cb6086e100bb8efcd2b", Pod:"coredns-6f6b679f8f-8kndr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali744bf05f77f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:15.058588 containerd[1694]: 2025-04-30 03:30:15.029 [INFO][5707] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Apr 30 03:30:15.058588 containerd[1694]: 2025-04-30 03:30:15.029 [INFO][5707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" iface="eth0" netns="" Apr 30 03:30:15.058588 containerd[1694]: 2025-04-30 03:30:15.029 [INFO][5707] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Apr 30 03:30:15.058588 containerd[1694]: 2025-04-30 03:30:15.029 [INFO][5707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Apr 30 03:30:15.058588 containerd[1694]: 2025-04-30 03:30:15.047 [INFO][5714] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" HandleID="k8s-pod-network.993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:30:15.058588 containerd[1694]: 2025-04-30 03:30:15.047 [INFO][5714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:15.058588 containerd[1694]: 2025-04-30 03:30:15.048 [INFO][5714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:15.058588 containerd[1694]: 2025-04-30 03:30:15.054 [WARNING][5714] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" HandleID="k8s-pod-network.993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:30:15.058588 containerd[1694]: 2025-04-30 03:30:15.054 [INFO][5714] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" HandleID="k8s-pod-network.993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--8kndr-eth0" Apr 30 03:30:15.058588 containerd[1694]: 2025-04-30 03:30:15.056 [INFO][5714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:15.058588 containerd[1694]: 2025-04-30 03:30:15.057 [INFO][5707] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627" Apr 30 03:30:15.059455 containerd[1694]: time="2025-04-30T03:30:15.058640449Z" level=info msg="TearDown network for sandbox \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\" successfully" Apr 30 03:30:15.067301 containerd[1694]: time="2025-04-30T03:30:15.067151170Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:15.067301 containerd[1694]: time="2025-04-30T03:30:15.067255671Z" level=info msg="RemovePodSandbox \"993217c2b5094d8206fe0f6b970b415eba850e60fc908061728bf28617299627\" returns successfully" Apr 30 03:30:15.067880 containerd[1694]: time="2025-04-30T03:30:15.067771279Z" level=info msg="StopPodSandbox for \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\"" Apr 30 03:30:15.134707 containerd[1694]: 2025-04-30 03:30:15.102 [WARNING][5732] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba", Pod:"coredns-6f6b679f8f-bkvhq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali618cf6a0ae6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:15.134707 containerd[1694]: 2025-04-30 03:30:15.103 [INFO][5732] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Apr 30 03:30:15.134707 containerd[1694]: 2025-04-30 03:30:15.103 [INFO][5732] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" iface="eth0" netns="" Apr 30 03:30:15.134707 containerd[1694]: 2025-04-30 03:30:15.103 [INFO][5732] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Apr 30 03:30:15.134707 containerd[1694]: 2025-04-30 03:30:15.103 [INFO][5732] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Apr 30 03:30:15.134707 containerd[1694]: 2025-04-30 03:30:15.124 [INFO][5740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" HandleID="k8s-pod-network.af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:30:15.134707 containerd[1694]: 2025-04-30 03:30:15.124 [INFO][5740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:15.134707 containerd[1694]: 2025-04-30 03:30:15.124 [INFO][5740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:15.134707 containerd[1694]: 2025-04-30 03:30:15.130 [WARNING][5740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" HandleID="k8s-pod-network.af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:30:15.134707 containerd[1694]: 2025-04-30 03:30:15.130 [INFO][5740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" HandleID="k8s-pod-network.af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:30:15.134707 containerd[1694]: 2025-04-30 03:30:15.132 [INFO][5740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:15.134707 containerd[1694]: 2025-04-30 03:30:15.133 [INFO][5732] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Apr 30 03:30:15.135820 containerd[1694]: time="2025-04-30T03:30:15.134770633Z" level=info msg="TearDown network for sandbox \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\" successfully" Apr 30 03:30:15.135820 containerd[1694]: time="2025-04-30T03:30:15.134803533Z" level=info msg="StopPodSandbox for \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\" returns successfully" Apr 30 03:30:15.135820 containerd[1694]: time="2025-04-30T03:30:15.135633245Z" level=info msg="RemovePodSandbox for \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\"" Apr 30 03:30:15.135820 containerd[1694]: time="2025-04-30T03:30:15.135668545Z" level=info msg="Forcibly stopping sandbox \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\"" Apr 30 03:30:15.220590 containerd[1694]: 2025-04-30 03:30:15.191 [WARNING][5758] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6c056dcd-a6ce-4fc1-ac1b-afb8a60d893a", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-afe39379c7", ContainerID:"c5d2e7f7063eae293a8276fc39ea95c510e5d63ad94022683ce282e05cecd7ba", Pod:"coredns-6f6b679f8f-bkvhq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali618cf6a0ae6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:15.220590 containerd[1694]: 2025-04-30 03:30:15.192 [INFO][5758] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Apr 30 03:30:15.220590 containerd[1694]: 2025-04-30 03:30:15.192 [INFO][5758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" iface="eth0" netns="" Apr 30 03:30:15.220590 containerd[1694]: 2025-04-30 03:30:15.192 [INFO][5758] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Apr 30 03:30:15.220590 containerd[1694]: 2025-04-30 03:30:15.192 [INFO][5758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Apr 30 03:30:15.220590 containerd[1694]: 2025-04-30 03:30:15.211 [INFO][5765] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" HandleID="k8s-pod-network.af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:30:15.220590 containerd[1694]: 2025-04-30 03:30:15.211 [INFO][5765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:15.220590 containerd[1694]: 2025-04-30 03:30:15.211 [INFO][5765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:15.220590 containerd[1694]: 2025-04-30 03:30:15.216 [WARNING][5765] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" HandleID="k8s-pod-network.af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:30:15.220590 containerd[1694]: 2025-04-30 03:30:15.217 [INFO][5765] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" HandleID="k8s-pod-network.af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Workload="ci--4081.3.3--a--afe39379c7-k8s-coredns--6f6b679f8f--bkvhq-eth0" Apr 30 03:30:15.220590 containerd[1694]: 2025-04-30 03:30:15.218 [INFO][5765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:15.220590 containerd[1694]: 2025-04-30 03:30:15.219 [INFO][5758] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad" Apr 30 03:30:15.221277 containerd[1694]: time="2025-04-30T03:30:15.220560054Z" level=info msg="TearDown network for sandbox \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\" successfully" Apr 30 03:30:15.230590 containerd[1694]: time="2025-04-30T03:30:15.230538696Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:15.230775 containerd[1694]: time="2025-04-30T03:30:15.230605397Z" level=info msg="RemovePodSandbox \"af731d775bbebdd8e7446afea473f4f90abd18323e5e60af4cc4a92ac510d8ad\" returns successfully" Apr 30 03:30:19.481222 kubelet[3113]: I0430 03:30:19.481148 3113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:54.251576 systemd[1]: Started sshd@7-10.200.8.38:22-10.200.16.10:53240.service - OpenSSH per-connection server daemon (10.200.16.10:53240). Apr 30 03:30:54.883950 sshd[5853]: Accepted publickey for core from 10.200.16.10 port 53240 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:30:54.885508 sshd[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:54.890225 systemd-logind[1678]: New session 10 of user core. Apr 30 03:30:54.893346 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:30:55.392711 sshd[5853]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:55.397609 systemd[1]: sshd@7-10.200.8.38:22-10.200.16.10:53240.service: Deactivated successfully. Apr 30 03:30:55.403568 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:30:55.405300 systemd-logind[1678]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:30:55.407196 systemd-logind[1678]: Removed session 10. Apr 30 03:31:00.507558 systemd[1]: Started sshd@8-10.200.8.38:22-10.200.16.10:50180.service - OpenSSH per-connection server daemon (10.200.16.10:50180). Apr 30 03:31:01.126244 sshd[5904]: Accepted publickey for core from 10.200.16.10 port 50180 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:01.127965 sshd[5904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:01.133033 systemd-logind[1678]: New session 11 of user core. 
Apr 30 03:31:01.142367 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 03:31:01.629057 sshd[5904]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:01.633441 systemd-logind[1678]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:31:01.634450 systemd[1]: sshd@8-10.200.8.38:22-10.200.16.10:50180.service: Deactivated successfully. Apr 30 03:31:01.636659 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:31:01.637663 systemd-logind[1678]: Removed session 11. Apr 30 03:31:06.743616 systemd[1]: Started sshd@9-10.200.8.38:22-10.200.16.10:50196.service - OpenSSH per-connection server daemon (10.200.16.10:50196). Apr 30 03:31:07.371262 sshd[5918]: Accepted publickey for core from 10.200.16.10 port 50196 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:07.372769 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:07.379031 systemd-logind[1678]: New session 12 of user core. Apr 30 03:31:07.384448 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 03:31:07.878744 sshd[5918]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:07.884759 systemd-logind[1678]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:31:07.885781 systemd[1]: sshd@9-10.200.8.38:22-10.200.16.10:50196.service: Deactivated successfully. Apr 30 03:31:07.888365 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:31:07.890577 systemd-logind[1678]: Removed session 12. Apr 30 03:31:12.989317 systemd[1]: Started sshd@10-10.200.8.38:22-10.200.16.10:59418.service - OpenSSH per-connection server daemon (10.200.16.10:59418). Apr 30 03:31:13.613309 sshd[5938]: Accepted publickey for core from 10.200.16.10 port 59418 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:13.615094 sshd[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:13.620719 systemd-logind[1678]: New session 13 of user core. Apr 30 03:31:13.625358 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 03:31:14.111350 sshd[5938]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:14.115939 systemd[1]: sshd@10-10.200.8.38:22-10.200.16.10:59418.service: Deactivated successfully. Apr 30 03:31:14.118591 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:31:14.119642 systemd-logind[1678]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:31:14.120774 systemd-logind[1678]: Removed session 13. Apr 30 03:31:14.229547 systemd[1]: Started sshd@11-10.200.8.38:22-10.200.16.10:59434.service - OpenSSH per-connection server daemon (10.200.16.10:59434). Apr 30 03:31:14.860103 sshd[5952]: Accepted publickey for core from 10.200.16.10 port 59434 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:14.861693 sshd[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:14.866430 systemd-logind[1678]: New session 14 of user core. Apr 30 03:31:14.872376 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 03:31:15.398874 sshd[5952]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:15.402944 systemd[1]: sshd@11-10.200.8.38:22-10.200.16.10:59434.service: Deactivated successfully. Apr 30 03:31:15.405066 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 03:31:15.405981 systemd-logind[1678]: Session 14 logged out. Waiting for processes to exit. 
Apr 30 03:31:15.407097 systemd-logind[1678]: Removed session 14. Apr 30 03:31:15.516517 systemd[1]: Started sshd@12-10.200.8.38:22-10.200.16.10:59450.service - OpenSSH per-connection server daemon (10.200.16.10:59450). Apr 30 03:31:16.136763 sshd[5965]: Accepted publickey for core from 10.200.16.10 port 59450 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:16.138671 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:16.143838 systemd-logind[1678]: New session 15 of user core. Apr 30 03:31:16.149365 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 03:31:16.639444 sshd[5965]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:16.643379 systemd[1]: sshd@12-10.200.8.38:22-10.200.16.10:59450.service: Deactivated successfully. Apr 30 03:31:16.645749 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 03:31:16.646818 systemd-logind[1678]: Session 15 logged out. Waiting for processes to exit. Apr 30 03:31:16.647796 systemd-logind[1678]: Removed session 15. Apr 30 03:31:18.715977 systemd[1]: run-containerd-runc-k8s.io-24eb1615c9c1237ff80d82c6c938f59f9f39a93b2c7140b687ad7b07efc91afe-runc.zDMui1.mount: Deactivated successfully. Apr 30 03:31:19.219923 update_engine[1680]: I20250430 03:31:19.219851 1680 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 03:31:19.219923 update_engine[1680]: I20250430 03:31:19.219928 1680 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 03:31:19.220516 update_engine[1680]: I20250430 03:31:19.220165 1680 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 30 03:31:19.221338 update_engine[1680]: I20250430 03:31:19.220882 1680 omaha_request_params.cc:62] Current group set to lts Apr 30 03:31:19.221338 update_engine[1680]: I20250430 03:31:19.221055 1680 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 30 03:31:19.221338 update_engine[1680]: I20250430 03:31:19.221072 1680 update_attempter.cc:643] Scheduling an action processor start. 
Apr 30 03:31:19.221338 update_engine[1680]: I20250430 03:31:19.221096 1680 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 03:31:19.221338 update_engine[1680]: I20250430 03:31:19.221136 1680 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 03:31:19.221338 update_engine[1680]: I20250430 03:31:19.221243 1680 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 03:31:19.221338 update_engine[1680]: I20250430 03:31:19.221256 1680 omaha_request_action.cc:272] Request: Apr 30 03:31:19.221338 update_engine[1680]: Apr 30 03:31:19.221338 update_engine[1680]: Apr 30 03:31:19.221338 update_engine[1680]: Apr 30 03:31:19.221338 update_engine[1680]: Apr 30 03:31:19.221338 update_engine[1680]: Apr 30 03:31:19.221338 update_engine[1680]: Apr 30 03:31:19.221338 update_engine[1680]: Apr 30 03:31:19.221338 update_engine[1680]: Apr 30 03:31:19.221338 update_engine[1680]: I20250430 03:31:19.221266 1680 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:31:19.222357 locksmithd[1713]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 03:31:19.223432 update_engine[1680]: I20250430 03:31:19.223397 1680 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:31:19.223793 update_engine[1680]: I20250430 03:31:19.223755 1680 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:31:19.253662 update_engine[1680]: E20250430 03:31:19.253598 1680 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:31:19.253789 update_engine[1680]: I20250430 03:31:19.253707 1680 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 03:31:21.751343 systemd[1]: Started sshd@13-10.200.8.38:22-10.200.16.10:38000.service - OpenSSH per-connection server daemon (10.200.16.10:38000). Apr 30 03:31:22.373603 sshd[6005]: Accepted publickey for core from 10.200.16.10 port 38000 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:22.375081 sshd[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:22.379768 systemd-logind[1678]: New session 16 of user core. Apr 30 03:31:22.387349 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 03:31:22.881639 sshd[6005]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:22.886711 systemd-logind[1678]: Session 16 logged out. Waiting for processes to exit. Apr 30 03:31:22.887480 systemd[1]: sshd@13-10.200.8.38:22-10.200.16.10:38000.service: Deactivated successfully. Apr 30 03:31:22.891971 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 03:31:22.894076 systemd-logind[1678]: Removed session 16. Apr 30 03:31:27.996494 systemd[1]: Started sshd@14-10.200.8.38:22-10.200.16.10:38004.service - OpenSSH per-connection server daemon (10.200.16.10:38004). Apr 30 03:31:28.628720 sshd[6037]: Accepted publickey for core from 10.200.16.10 port 38004 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:28.630471 sshd[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:28.635143 systemd-logind[1678]: New session 17 of user core. Apr 30 03:31:28.639341 systemd[1]: Started session-17.scope - Session 17 of User core. 
Apr 30 03:31:29.132922 sshd[6037]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:29.137370 systemd[1]: sshd@14-10.200.8.38:22-10.200.16.10:38004.service: Deactivated successfully. Apr 30 03:31:29.139837 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 03:31:29.140827 systemd-logind[1678]: Session 17 logged out. Waiting for processes to exit. Apr 30 03:31:29.141745 systemd-logind[1678]: Removed session 17. Apr 30 03:31:29.218148 update_engine[1680]: I20250430 03:31:29.218055 1680 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:31:29.218698 update_engine[1680]: I20250430 03:31:29.218443 1680 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:31:29.218897 update_engine[1680]: I20250430 03:31:29.218775 1680 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:31:29.241021 update_engine[1680]: E20250430 03:31:29.240962 1680 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:31:29.241146 update_engine[1680]: I20250430 03:31:29.241051 1680 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 30 03:31:34.244259 systemd[1]: Started sshd@15-10.200.8.38:22-10.200.16.10:46378.service - OpenSSH per-connection server daemon (10.200.16.10:46378). Apr 30 03:31:34.872330 sshd[6068]: Accepted publickey for core from 10.200.16.10 port 46378 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:34.873641 sshd[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:34.879794 systemd-logind[1678]: New session 18 of user core. Apr 30 03:31:34.882376 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 03:31:35.375156 sshd[6068]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:35.379130 systemd[1]: sshd@15-10.200.8.38:22-10.200.16.10:46378.service: Deactivated successfully. Apr 30 03:31:35.381559 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 03:31:35.382659 systemd-logind[1678]: Session 18 logged out. Waiting for processes to exit. Apr 30 03:31:35.383764 systemd-logind[1678]: Removed session 18. Apr 30 03:31:35.491542 systemd[1]: Started sshd@16-10.200.8.38:22-10.200.16.10:46390.service - OpenSSH per-connection server daemon (10.200.16.10:46390). Apr 30 03:31:36.109955 sshd[6081]: Accepted publickey for core from 10.200.16.10 port 46390 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc Apr 30 03:31:36.111466 sshd[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:36.116101 systemd-logind[1678]: New session 19 of user core. Apr 30 03:31:36.123359 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 03:31:36.907860 sshd[6081]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:36.911701 systemd[1]: sshd@16-10.200.8.38:22-10.200.16.10:46390.service: Deactivated successfully. Apr 30 03:31:36.914450 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 03:31:36.916315 systemd-logind[1678]: Session 19 logged out. Waiting for processes to exit. Apr 30 03:31:36.917367 systemd-logind[1678]: Removed session 19. Apr 30 03:31:37.023929 systemd[1]: Started sshd@17-10.200.8.38:22-10.200.16.10:46406.service - OpenSSH per-connection server daemon (10.200.16.10:46406). 
Apr 30 03:31:37.642544 sshd[6092]: Accepted publickey for core from 10.200.16.10 port 46406 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:31:37.644388 sshd[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:31:37.650078 systemd-logind[1678]: New session 20 of user core.
Apr 30 03:31:37.656420 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 03:31:39.219223 update_engine[1680]: I20250430 03:31:39.219030 1680 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 30 03:31:39.220162 update_engine[1680]: I20250430 03:31:39.219841 1680 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 30 03:31:39.220162 update_engine[1680]: I20250430 03:31:39.220114 1680 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 30 03:31:39.247234 update_engine[1680]: E20250430 03:31:39.247134 1680 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 30 03:31:39.247453 update_engine[1680]: I20250430 03:31:39.247416 1680 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 30 03:31:40.174154 sshd[6092]: pam_unix(sshd:session): session closed for user core
Apr 30 03:31:40.178937 systemd[1]: sshd@17-10.200.8.38:22-10.200.16.10:46406.service: Deactivated successfully.
Apr 30 03:31:40.181525 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 03:31:40.182338 systemd-logind[1678]: Session 20 logged out. Waiting for processes to exit.
Apr 30 03:31:40.183385 systemd-logind[1678]: Removed session 20.
Apr 30 03:31:40.287518 systemd[1]: Started sshd@18-10.200.8.38:22-10.200.16.10:60414.service - OpenSSH per-connection server daemon (10.200.16.10:60414).
Apr 30 03:31:40.918820 sshd[6110]: Accepted publickey for core from 10.200.16.10 port 60414 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:31:40.920455 sshd[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:31:40.925038 systemd-logind[1678]: New session 21 of user core.
Apr 30 03:31:40.930346 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 03:31:41.653794 sshd[6110]: pam_unix(sshd:session): session closed for user core
Apr 30 03:31:41.659910 systemd[1]: sshd@18-10.200.8.38:22-10.200.16.10:60414.service: Deactivated successfully.
Apr 30 03:31:41.662730 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 03:31:41.665607 systemd-logind[1678]: Session 21 logged out. Waiting for processes to exit.
Apr 30 03:31:41.667268 systemd-logind[1678]: Removed session 21.
Apr 30 03:31:41.769573 systemd[1]: Started sshd@19-10.200.8.38:22-10.200.16.10:60422.service - OpenSSH per-connection server daemon (10.200.16.10:60422).
Apr 30 03:31:42.417524 sshd[6121]: Accepted publickey for core from 10.200.16.10 port 60422 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:31:42.419244 sshd[6121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:31:42.423926 systemd-logind[1678]: New session 22 of user core.
Apr 30 03:31:42.427363 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 03:31:42.912994 sshd[6121]: pam_unix(sshd:session): session closed for user core
Apr 30 03:31:42.917649 systemd[1]: sshd@19-10.200.8.38:22-10.200.16.10:60422.service: Deactivated successfully.
Apr 30 03:31:42.920265 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 03:31:42.920972 systemd-logind[1678]: Session 22 logged out. Waiting for processes to exit.
Apr 30 03:31:42.922461 systemd-logind[1678]: Removed session 22.
Apr 30 03:31:48.027535 systemd[1]: Started sshd@20-10.200.8.38:22-10.200.16.10:60434.service - OpenSSH per-connection server daemon (10.200.16.10:60434).
Apr 30 03:31:48.647718 sshd[6137]: Accepted publickey for core from 10.200.16.10 port 60434 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:31:48.649545 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:31:48.654527 systemd-logind[1678]: New session 23 of user core.
Apr 30 03:31:48.660327 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 03:31:48.716882 systemd[1]: run-containerd-runc-k8s.io-24eb1615c9c1237ff80d82c6c938f59f9f39a93b2c7140b687ad7b07efc91afe-runc.8wDG5H.mount: Deactivated successfully.
Apr 30 03:31:49.145821 sshd[6137]: pam_unix(sshd:session): session closed for user core
Apr 30 03:31:49.149235 systemd[1]: sshd@20-10.200.8.38:22-10.200.16.10:60434.service: Deactivated successfully.
Apr 30 03:31:49.151868 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 03:31:49.153964 systemd-logind[1678]: Session 23 logged out. Waiting for processes to exit.
Apr 30 03:31:49.155627 systemd-logind[1678]: Removed session 23.
Apr 30 03:31:49.217422 update_engine[1680]: I20250430 03:31:49.217319 1680 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 30 03:31:49.218005 update_engine[1680]: I20250430 03:31:49.217669 1680 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 30 03:31:49.218074 update_engine[1680]: I20250430 03:31:49.218003 1680 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 30 03:31:49.236938 update_engine[1680]: E20250430 03:31:49.236877 1680 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 30 03:31:49.237101 update_engine[1680]: I20250430 03:31:49.236958 1680 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 30 03:31:49.237101 update_engine[1680]: I20250430 03:31:49.236971 1680 omaha_request_action.cc:617] Omaha request response:
Apr 30 03:31:49.237101 update_engine[1680]: E20250430 03:31:49.237058 1680 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 30 03:31:49.237101 update_engine[1680]: I20250430 03:31:49.237084 1680 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 30 03:31:49.237101 update_engine[1680]: I20250430 03:31:49.237092 1680 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 30 03:31:49.237101 update_engine[1680]: I20250430 03:31:49.237098 1680 update_attempter.cc:306] Processing Done.
Apr 30 03:31:49.237338 update_engine[1680]: E20250430 03:31:49.237116 1680 update_attempter.cc:619] Update failed.
Apr 30 03:31:49.237338 update_engine[1680]: I20250430 03:31:49.237124 1680 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 30 03:31:49.237338 update_engine[1680]: I20250430 03:31:49.237133 1680 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 30 03:31:49.237338 update_engine[1680]: I20250430 03:31:49.237140 1680 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 30 03:31:49.237338 update_engine[1680]: I20250430 03:31:49.237254 1680 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 30 03:31:49.237338 update_engine[1680]: I20250430 03:31:49.237284 1680 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 30 03:31:49.237338 update_engine[1680]: I20250430 03:31:49.237294 1680 omaha_request_action.cc:272] Request:
Apr 30 03:31:49.237338 update_engine[1680]:
Apr 30 03:31:49.237338 update_engine[1680]:
Apr 30 03:31:49.237338 update_engine[1680]:
Apr 30 03:31:49.237338 update_engine[1680]:
Apr 30 03:31:49.237338 update_engine[1680]:
Apr 30 03:31:49.237338 update_engine[1680]:
Apr 30 03:31:49.237338 update_engine[1680]: I20250430 03:31:49.237303 1680 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 30 03:31:49.237825 update_engine[1680]: I20250430 03:31:49.237504 1680 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 30 03:31:49.237825 update_engine[1680]: I20250430 03:31:49.237746 1680 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 30 03:31:49.238076 locksmithd[1713]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 30 03:31:49.253512 update_engine[1680]: E20250430 03:31:49.253461 1680 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 30 03:31:49.253599 update_engine[1680]: I20250430 03:31:49.253525 1680 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 30 03:31:49.253599 update_engine[1680]: I20250430 03:31:49.253536 1680 omaha_request_action.cc:617] Omaha request response:
Apr 30 03:31:49.253599 update_engine[1680]: I20250430 03:31:49.253546 1680 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 30 03:31:49.253599 update_engine[1680]: I20250430 03:31:49.253553 1680 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 30 03:31:49.253599 update_engine[1680]: I20250430 03:31:49.253561 1680 update_attempter.cc:306] Processing Done.
Apr 30 03:31:49.253599 update_engine[1680]: I20250430 03:31:49.253568 1680 update_attempter.cc:310] Error event sent.
Apr 30 03:31:49.253599 update_engine[1680]: I20250430 03:31:49.253579 1680 update_check_scheduler.cc:74] Next update check in 43m32s
Apr 30 03:31:49.253964 locksmithd[1713]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 30 03:31:54.258374 systemd[1]: Started sshd@21-10.200.8.38:22-10.200.16.10:44666.service - OpenSSH per-connection server daemon (10.200.16.10:44666).
Apr 30 03:31:54.884451 sshd[6174]: Accepted publickey for core from 10.200.16.10 port 44666 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:31:54.885050 sshd[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:31:54.889718 systemd-logind[1678]: New session 24 of user core.
Apr 30 03:31:54.892366 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 03:31:55.382429 sshd[6174]: pam_unix(sshd:session): session closed for user core
Apr 30 03:31:55.385980 systemd[1]: sshd@21-10.200.8.38:22-10.200.16.10:44666.service: Deactivated successfully.
Apr 30 03:31:55.388469 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 03:31:55.390362 systemd-logind[1678]: Session 24 logged out. Waiting for processes to exit.
Apr 30 03:31:55.391697 systemd-logind[1678]: Removed session 24.
Apr 30 03:31:59.675997 systemd[1]: run-containerd-runc-k8s.io-99411712d1b4aaa44cb0a4b9e2e6326c6e655c9bade85be7609b4ceada26baa5-runc.78lZbn.mount: Deactivated successfully.
Apr 30 03:32:00.496480 systemd[1]: Started sshd@22-10.200.8.38:22-10.200.16.10:50198.service - OpenSSH per-connection server daemon (10.200.16.10:50198).
Apr 30 03:32:01.115275 sshd[6226]: Accepted publickey for core from 10.200.16.10 port 50198 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:32:01.117078 sshd[6226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:01.122339 systemd-logind[1678]: New session 25 of user core.
Apr 30 03:32:01.128039 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 03:32:01.613485 sshd[6226]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:01.617030 systemd[1]: sshd@22-10.200.8.38:22-10.200.16.10:50198.service: Deactivated successfully.
Apr 30 03:32:01.619740 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 03:32:01.621863 systemd-logind[1678]: Session 25 logged out. Waiting for processes to exit.
Apr 30 03:32:01.623288 systemd-logind[1678]: Removed session 25.
Apr 30 03:32:06.725569 systemd[1]: Started sshd@23-10.200.8.38:22-10.200.16.10:50208.service - OpenSSH per-connection server daemon (10.200.16.10:50208).
Apr 30 03:32:07.354462 sshd[6239]: Accepted publickey for core from 10.200.16.10 port 50208 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:32:07.356260 sshd[6239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:07.361588 systemd-logind[1678]: New session 26 of user core.
Apr 30 03:32:07.367348 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 03:32:07.858244 sshd[6239]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:07.862392 systemd[1]: sshd@23-10.200.8.38:22-10.200.16.10:50208.service: Deactivated successfully.
Apr 30 03:32:07.864485 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 03:32:07.865257 systemd-logind[1678]: Session 26 logged out. Waiting for processes to exit.
Apr 30 03:32:07.866155 systemd-logind[1678]: Removed session 26.
Apr 30 03:32:12.970585 systemd[1]: Started sshd@24-10.200.8.38:22-10.200.16.10:44226.service - OpenSSH per-connection server daemon (10.200.16.10:44226).
Apr 30 03:32:13.597196 sshd[6252]: Accepted publickey for core from 10.200.16.10 port 44226 ssh2: RSA SHA256:OLsKYULe32LLXVoJxvvXWTNZjsaTPeeI6IR+UHWIufc
Apr 30 03:32:13.598819 sshd[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:13.603884 systemd-logind[1678]: New session 27 of user core.
Apr 30 03:32:13.608379 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 30 03:32:14.101141 sshd[6252]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:14.106423 systemd[1]: sshd@24-10.200.8.38:22-10.200.16.10:44226.service: Deactivated successfully.
Apr 30 03:32:14.109281 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 03:32:14.110082 systemd-logind[1678]: Session 27 logged out. Waiting for processes to exit.
Apr 30 03:32:14.111132 systemd-logind[1678]: Removed session 27.