Jul 7 06:15:10.020728 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025
Jul 7 06:15:10.020762 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:15:10.020774 kernel: BIOS-provided physical RAM map:
Jul 7 06:15:10.020782 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 7 06:15:10.020789 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 7 06:15:10.020796 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jul 7 06:15:10.020805 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jul 7 06:15:10.020815 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jul 7 06:15:10.020822 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jul 7 06:15:10.020829 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 7 06:15:10.020849 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 7 06:15:10.020857 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 7 06:15:10.020864 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 7 06:15:10.020872 kernel: printk: legacy bootconsole [earlyser0] enabled
Jul 7 06:15:10.020883 kernel: NX (Execute Disable) protection: active
Jul 7 06:15:10.020891 kernel: APIC: Static calls initialized
Jul 7 06:15:10.020898 kernel: efi: EFI v2.7 by Microsoft
Jul 7 06:15:10.020907 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eab5518 RNG=0x3ffd2018
Jul 7 06:15:10.020915 kernel: random: crng init done
Jul 7 06:15:10.020923 kernel: secureboot: Secure boot disabled
Jul 7 06:15:10.020931 kernel: SMBIOS 3.1.0 present.
Jul 7 06:15:10.020938 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Jul 7 06:15:10.020946 kernel: DMI: Memory slots populated: 2/2
Jul 7 06:15:10.020955 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 7 06:15:10.020963 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jul 7 06:15:10.020971 kernel: Hyper-V: Nested features: 0x3e0101
Jul 7 06:15:10.020978 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 7 06:15:10.020986 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 7 06:15:10.020994 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 7 06:15:10.021002 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 7 06:15:10.021010 kernel: tsc: Detected 2300.000 MHz processor
Jul 7 06:15:10.021017 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 06:15:10.021026 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 06:15:10.021036 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jul 7 06:15:10.021044 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 7 06:15:10.021053 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 06:15:10.021061 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jul 7 06:15:10.021069 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jul 7 06:15:10.021077 kernel: Using GB pages for direct mapping
Jul 7 06:15:10.021085 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:15:10.021097 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 7 06:15:10.021107 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:15:10.021115 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:15:10.021124 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 7 06:15:10.021132 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 7 06:15:10.021141 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:15:10.021149 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:15:10.021160 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:15:10.021168 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jul 7 06:15:10.021177 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jul 7 06:15:10.021185 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:15:10.021194 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 7 06:15:10.021203 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Jul 7 06:15:10.021211 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 7 06:15:10.021220 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 7 06:15:10.021228 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 7 06:15:10.021238 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 7 06:15:10.021246 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Jul 7 06:15:10.021255 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jul 7 06:15:10.021263 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 7 06:15:10.021272 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 7 06:15:10.021281 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jul 7 06:15:10.021289 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jul 7 06:15:10.021299 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jul 7 06:15:10.021307 kernel: Zone ranges:
Jul 7 06:15:10.021317 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 06:15:10.021326 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 7 06:15:10.021334 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jul 7 06:15:10.021343 kernel: Device empty
Jul 7 06:15:10.021351 kernel: Movable zone start for each node
Jul 7 06:15:10.021359 kernel: Early memory node ranges
Jul 7 06:15:10.021368 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 7 06:15:10.021376 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Jul 7 06:15:10.021385 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jul 7 06:15:10.021395 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 7 06:15:10.021403 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 7 06:15:10.021411 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 7 06:15:10.021420 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 06:15:10.021428 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 7 06:15:10.021437 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jul 7 06:15:10.021445 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jul 7 06:15:10.021454 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 7 06:15:10.021462 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 06:15:10.021472 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 06:15:10.021481 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 06:15:10.021489 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 7 06:15:10.021498 kernel: TSC deadline timer available
Jul 7 06:15:10.021506 kernel: CPU topo: Max. logical packages: 1
Jul 7 06:15:10.021515 kernel: CPU topo: Max. logical dies: 1
Jul 7 06:15:10.021523 kernel: CPU topo: Max. dies per package: 1
Jul 7 06:15:10.021531 kernel: CPU topo: Max. threads per core: 2
Jul 7 06:15:10.021540 kernel: CPU topo: Num. cores per package: 1
Jul 7 06:15:10.021550 kernel: CPU topo: Num. threads per package: 2
Jul 7 06:15:10.021558 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 7 06:15:10.021567 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 7 06:15:10.021575 kernel: Booting paravirtualized kernel on Hyper-V
Jul 7 06:15:10.021584 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 06:15:10.021592 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 7 06:15:10.021601 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 7 06:15:10.021609 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 7 06:15:10.021618 kernel: pcpu-alloc: [0] 0 1
Jul 7 06:15:10.021627 kernel: Hyper-V: PV spinlocks enabled
Jul 7 06:15:10.021636 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 06:15:10.021647 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:15:10.021656 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:15:10.021664 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 7 06:15:10.021673 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 06:15:10.021681 kernel: Fallback order for Node 0: 0
Jul 7 06:15:10.021690 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jul 7 06:15:10.021700 kernel: Policy zone: Normal
Jul 7 06:15:10.021708 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:15:10.021717 kernel: software IO TLB: area num 2.
Jul 7 06:15:10.021725 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 06:15:10.021734 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 06:15:10.021742 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 06:15:10.021750 kernel: Dynamic Preempt: voluntary
Jul 7 06:15:10.021759 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:15:10.021769 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:15:10.021785 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 06:15:10.021794 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:15:10.021803 kernel: Rude variant of Tasks RCU enabled.
Jul 7 06:15:10.021814 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:15:10.021823 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:15:10.021832 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 06:15:10.022919 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:15:10.022934 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:15:10.022944 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:15:10.022954 kernel: Using NULL legacy PIC
Jul 7 06:15:10.022967 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 7 06:15:10.022976 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:15:10.022986 kernel: Console: colour dummy device 80x25
Jul 7 06:15:10.022995 kernel: printk: legacy console [tty1] enabled
Jul 7 06:15:10.023005 kernel: printk: legacy console [ttyS0] enabled
Jul 7 06:15:10.023014 kernel: printk: legacy bootconsole [earlyser0] disabled
Jul 7 06:15:10.023023 kernel: ACPI: Core revision 20240827
Jul 7 06:15:10.023034 kernel: Failed to register legacy timer interrupt
Jul 7 06:15:10.023042 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 06:15:10.023052 kernel: x2apic enabled
Jul 7 06:15:10.023062 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 06:15:10.023071 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 7 06:15:10.023080 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 7 06:15:10.023090 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jul 7 06:15:10.023100 kernel: Hyper-V: Using IPI hypercalls
Jul 7 06:15:10.023109 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jul 7 06:15:10.023120 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jul 7 06:15:10.023130 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jul 7 06:15:10.023139 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jul 7 06:15:10.023148 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jul 7 06:15:10.023157 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jul 7 06:15:10.023167 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jul 7 06:15:10.023176 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
Jul 7 06:15:10.023185 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 06:15:10.023196 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 7 06:15:10.023206 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 7 06:15:10.023215 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 06:15:10.023224 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 06:15:10.023233 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 06:15:10.023243 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 7 06:15:10.023253 kernel: RETBleed: Vulnerable
Jul 7 06:15:10.023262 kernel: Speculative Store Bypass: Vulnerable
Jul 7 06:15:10.023271 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 7 06:15:10.023280 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 06:15:10.023289 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 06:15:10.023300 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 06:15:10.023309 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 7 06:15:10.023318 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 7 06:15:10.023327 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 7 06:15:10.023336 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jul 7 06:15:10.023345 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jul 7 06:15:10.023354 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jul 7 06:15:10.023363 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 06:15:10.023372 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jul 7 06:15:10.023382 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jul 7 06:15:10.023391 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 7 06:15:10.023401 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jul 7 06:15:10.023410 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jul 7 06:15:10.023419 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jul 7 06:15:10.023428 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jul 7 06:15:10.023438 kernel: Freeing SMP alternatives memory: 32K
Jul 7 06:15:10.023446 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:15:10.023455 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 06:15:10.023463 kernel: landlock: Up and running.
Jul 7 06:15:10.023472 kernel: SELinux: Initializing.
Jul 7 06:15:10.023481 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 7 06:15:10.023490 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 7 06:15:10.023500 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jul 7 06:15:10.023511 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jul 7 06:15:10.023520 kernel: signal: max sigframe size: 11952
Jul 7 06:15:10.023529 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:15:10.023539 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:15:10.023549 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 06:15:10.023558 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 7 06:15:10.023567 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:15:10.023577 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 06:15:10.023586 kernel: .... node #0, CPUs: #1
Jul 7 06:15:10.023596 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 06:15:10.023606 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Jul 7 06:15:10.023616 kernel: Memory: 8077020K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 299992K reserved, 0K cma-reserved)
Jul 7 06:15:10.023626 kernel: devtmpfs: initialized
Jul 7 06:15:10.023635 kernel: x86/mm: Memory block size: 128MB
Jul 7 06:15:10.023644 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 7 06:15:10.023655 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:15:10.023664 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 06:15:10.023673 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:15:10.023684 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:15:10.023693 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:15:10.023702 kernel: audit: type=2000 audit(1751868907.029:1): state=initialized audit_enabled=0 res=1
Jul 7 06:15:10.023712 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:15:10.023721 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 06:15:10.023730 kernel: cpuidle: using governor menu
Jul 7 06:15:10.023739 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:15:10.023749 kernel: dca service started, version 1.12.1
Jul 7 06:15:10.023758 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jul 7 06:15:10.023769 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jul 7 06:15:10.023778 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 06:15:10.023787 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:15:10.023796 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:15:10.023806 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:15:10.023815 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:15:10.023824 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:15:10.023833 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:15:10.023856 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:15:10.023868 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:15:10.023876 kernel: ACPI: Interpreter enabled
Jul 7 06:15:10.023884 kernel: ACPI: PM: (supports S0 S5)
Jul 7 06:15:10.023892 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 06:15:10.023899 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 06:15:10.023906 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 7 06:15:10.026299 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 7 06:15:10.026311 kernel: iommu: Default domain type: Translated
Jul 7 06:15:10.026321 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 06:15:10.026334 kernel: efivars: Registered efivars operations
Jul 7 06:15:10.026343 kernel: PCI: Using ACPI for IRQ routing
Jul 7 06:15:10.026353 kernel: PCI: System does not support PCI
Jul 7 06:15:10.026362 kernel: vgaarb: loaded
Jul 7 06:15:10.026372 kernel: clocksource: Switched to clocksource tsc-early
Jul 7 06:15:10.026381 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:15:10.026391 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:15:10.026400 kernel: pnp: PnP ACPI init
Jul 7 06:15:10.026409 kernel: pnp: PnP ACPI: found 3 devices
Jul 7 06:15:10.026421 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 06:15:10.026430 kernel: NET: Registered PF_INET protocol family
Jul 7 06:15:10.026439 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 7 06:15:10.026448 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 7 06:15:10.026457 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:15:10.026466 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 06:15:10.026475 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 7 06:15:10.026484 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 7 06:15:10.026495 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 7 06:15:10.026504 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 7 06:15:10.026513 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:15:10.026522 kernel: NET: Registered PF_XDP protocol family
Jul 7 06:15:10.026531 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:15:10.026540 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 7 06:15:10.026549 kernel: software IO TLB: mapped [mem 0x000000003a9c6000-0x000000003e9c6000] (64MB)
Jul 7 06:15:10.026558 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jul 7 06:15:10.026567 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jul 7 06:15:10.026578 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jul 7 06:15:10.026587 kernel: clocksource: Switched to clocksource tsc
Jul 7 06:15:10.026596 kernel: Initialise system trusted keyrings
Jul 7 06:15:10.026605 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 7 06:15:10.026614 kernel: Key type asymmetric registered
Jul 7 06:15:10.026623 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:15:10.026632 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:15:10.026641 kernel: io scheduler mq-deadline registered
Jul 7 06:15:10.026650 kernel: io scheduler kyber registered
Jul 7 06:15:10.026660 kernel: io scheduler bfq registered
Jul 7 06:15:10.026669 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 06:15:10.026678 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:15:10.026687 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 06:15:10.026696 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 7 06:15:10.026705 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 06:15:10.026714 kernel: i8042: PNP: No PS/2 controller found.
Jul 7 06:15:10.026882 kernel: rtc_cmos 00:02: registered as rtc0
Jul 7 06:15:10.026959 kernel: rtc_cmos 00:02: setting system clock to 2025-07-07T06:15:09 UTC (1751868909)
Jul 7 06:15:10.027029 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul 7 06:15:10.027040 kernel: intel_pstate: Intel P-state driver initializing
Jul 7 06:15:10.027050 kernel: efifb: probing for efifb
Jul 7 06:15:10.027059 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 7 06:15:10.027068 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 7 06:15:10.027078 kernel: efifb: scrolling: redraw
Jul 7 06:15:10.027087 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 7 06:15:10.027096 kernel: Console: switching to colour frame buffer device 128x48
Jul 7 06:15:10.027107 kernel: fb0: EFI VGA frame buffer device
Jul 7 06:15:10.027116 kernel: pstore: Using crash dump compression: deflate
Jul 7 06:15:10.027126 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 7 06:15:10.027135 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:15:10.027144 kernel: Segment Routing with IPv6
Jul 7 06:15:10.027152 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:15:10.027162 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:15:10.027171 kernel: Key type dns_resolver registered
Jul 7 06:15:10.027180 kernel: IPI shorthand broadcast: enabled
Jul 7 06:15:10.027192 kernel: sched_clock: Marking stable (3062004263, 94009684)->(3501666426, -345652479)
Jul 7 06:15:10.027201 kernel: registered taskstats version 1
Jul 7 06:15:10.027210 kernel: Loading compiled-in X.509 certificates
Jul 7 06:15:10.027219 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e'
Jul 7 06:15:10.027228 kernel: Demotion targets for Node 0: null
Jul 7 06:15:10.027237 kernel: Key type .fscrypt registered
Jul 7 06:15:10.027246 kernel: Key type fscrypt-provisioning registered
Jul 7 06:15:10.027255 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:15:10.027264 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:15:10.027276 kernel: ima: No architecture policies found
Jul 7 06:15:10.027284 kernel: clk: Disabling unused clocks
Jul 7 06:15:10.027293 kernel: Warning: unable to open an initial console.
Jul 7 06:15:10.027302 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 06:15:10.027311 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 06:15:10.027320 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 06:15:10.027329 kernel: Run /init as init process
Jul 7 06:15:10.027338 kernel: with arguments:
Jul 7 06:15:10.027347 kernel: /init
Jul 7 06:15:10.027358 kernel: with environment:
Jul 7 06:15:10.027367 kernel: HOME=/
Jul 7 06:15:10.027375 kernel: TERM=linux
Jul 7 06:15:10.027382 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:15:10.027393 systemd[1]: Successfully made /usr/ read-only.
Jul 7 06:15:10.027405 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:15:10.027414 systemd[1]: Detected virtualization microsoft.
Jul 7 06:15:10.027424 systemd[1]: Detected architecture x86-64.
Jul 7 06:15:10.027433 systemd[1]: Running in initrd.
Jul 7 06:15:10.027442 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:15:10.027451 systemd[1]: Hostname set to .
Jul 7 06:15:10.027460 systemd[1]: Initializing machine ID from random generator.
Jul 7 06:15:10.027469 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:15:10.027478 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:15:10.027487 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:15:10.027499 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:15:10.027509 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:15:10.027518 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:15:10.027528 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:15:10.027538 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:15:10.027547 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:15:10.027557 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:15:10.027568 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:15:10.027578 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:15:10.027587 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:15:10.027596 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:15:10.027605 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:15:10.027613 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:15:10.027622 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:15:10.027630 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:15:10.027638 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 06:15:10.027649 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:15:10.027658 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:15:10.027667 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:15:10.027675 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:15:10.027683 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:15:10.027691 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:15:10.027700 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:15:10.027708 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 06:15:10.027720 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:15:10.027729 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:15:10.027738 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:15:10.027758 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:15:10.027770 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:15:10.027782 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:15:10.027792 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:15:10.027821 systemd-journald[205]: Collecting audit messages is disabled.
Jul 7 06:15:10.028031 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:15:10.028046 systemd-journald[205]: Journal started
Jul 7 06:15:10.028069 systemd-journald[205]: Runtime Journal (/run/log/journal/4fa79aa6e666457998277a8614b6a90e) is 8M, max 158.9M, 150.9M free.
Jul 7 06:15:10.011497 systemd-modules-load[206]: Inserted module 'overlay'
Jul 7 06:15:10.035560 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:15:10.041076 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:15:10.048784 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:15:10.054315 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:15:10.058555 systemd-modules-load[206]: Inserted module 'br_netfilter'
Jul 7 06:15:10.060925 kernel: Bridge firewalling registered
Jul 7 06:15:10.061237 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:15:10.065276 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:15:10.069206 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:15:10.076218 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:15:10.080307 systemd-tmpfiles[222]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 7 06:15:10.080953 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:15:10.093134 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:15:10.096877 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:15:10.103041 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:15:10.104308 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:15:10.110555 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 06:15:10.116153 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:15:10.141875 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 06:15:10.168949 systemd-resolved[244]: Positive Trust Anchors: Jul 7 06:15:10.169272 systemd-resolved[244]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:15:10.169310 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:15:10.186863 systemd-resolved[244]: Defaulting to hostname 'linux'. Jul 7 06:15:10.189561 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:15:10.192728 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:15:10.212852 kernel: SCSI subsystem initialized Jul 7 06:15:10.219859 kernel: Loading iSCSI transport class v2.0-870. Jul 7 06:15:10.229862 kernel: iscsi: registered transport (tcp) Jul 7 06:15:10.247222 kernel: iscsi: registered transport (qla4xxx) Jul 7 06:15:10.247259 kernel: QLogic iSCSI HBA Driver Jul 7 06:15:10.259678 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 06:15:10.267966 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:15:10.269419 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 06:15:10.300898 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 06:15:10.306128 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jul 7 06:15:10.346862 kernel: raid6: avx512x4 gen() 32641 MB/s Jul 7 06:15:10.363849 kernel: raid6: avx512x2 gen() 33070 MB/s Jul 7 06:15:10.380846 kernel: raid6: avx512x1 gen() 27525 MB/s Jul 7 06:15:10.398851 kernel: raid6: avx2x4 gen() 29465 MB/s Jul 7 06:15:10.415848 kernel: raid6: avx2x2 gen() 30936 MB/s Jul 7 06:15:10.433429 kernel: raid6: avx2x1 gen() 18846 MB/s Jul 7 06:15:10.433455 kernel: raid6: using algorithm avx512x2 gen() 33070 MB/s Jul 7 06:15:10.452156 kernel: raid6: .... xor() 30308 MB/s, rmw enabled Jul 7 06:15:10.452247 kernel: raid6: using avx512x2 recovery algorithm Jul 7 06:15:10.471865 kernel: xor: automatically using best checksumming function avx Jul 7 06:15:10.586859 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 06:15:10.591410 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:15:10.594795 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:15:10.620802 systemd-udevd[453]: Using default interface naming scheme 'v255'. Jul 7 06:15:10.624989 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:15:10.631434 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 06:15:10.649981 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Jul 7 06:15:10.667437 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:15:10.671178 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:15:10.703294 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:15:10.709419 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 06:15:10.754851 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 06:15:10.759859 kernel: hv_vmbus: Vmbus version:5.3 Jul 7 06:15:10.781206 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 7 06:15:10.790199 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:15:10.795927 kernel: AES CTR mode by8 optimization enabled Jul 7 06:15:10.794672 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:15:10.801160 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:15:10.810576 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 7 06:15:10.810612 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 7 06:15:10.816293 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 06:15:10.816324 kernel: hv_vmbus: registering driver hv_netvsc Jul 7 06:15:10.818860 kernel: PTP clock support registered Jul 7 06:15:10.823859 kernel: hv_vmbus: registering driver hid_hyperv Jul 7 06:15:10.827048 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jul 7 06:15:10.826735 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:15:10.833776 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 7 06:15:10.826802 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:15:10.838270 kernel: hv_utils: Registering HyperV Utility Driver Jul 7 06:15:10.838302 kernel: hv_vmbus: registering driver hv_utils Jul 7 06:15:10.838522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:15:10.853147 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd101557 (unnamed net_device) (uninitialized): VF slot 1 added Jul 7 06:15:10.853360 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 7 06:15:10.872858 kernel: hv_utils: Shutdown IC version 3.2 Jul 7 06:15:10.878861 kernel: hv_utils: Heartbeat IC version 3.0 Jul 7 06:15:10.878897 kernel: hv_utils: TimeSync IC version 4.0 Jul 7 06:15:11.236583 systemd-resolved[244]: Clock change detected. Flushing caches. 
Jul 7 06:15:11.243003 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jul 7 06:15:11.251858 kernel: hv_vmbus: registering driver hv_storvsc Jul 7 06:15:11.253864 kernel: scsi host0: storvsc_host_t Jul 7 06:15:11.257480 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jul 7 06:15:11.261509 kernel: hv_vmbus: registering driver hv_pci Jul 7 06:15:11.264201 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:15:11.267667 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jul 7 06:15:11.273341 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jul 7 06:15:11.273497 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jul 7 06:15:11.275019 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jul 7 06:15:11.284818 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jul 7 06:15:11.284865 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jul 7 06:15:11.288668 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 7 06:15:11.288829 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 7 06:15:11.290187 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 7 06:15:11.297923 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jul 7 06:15:11.301211 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jul 7 06:15:11.301450 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jul 7 06:15:11.318660 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 7 06:15:11.318838 kernel: nvme nvme0: pci function c05b:00:00.0 Jul 7 06:15:11.318969 kernel: 
nvme c05b:00:00.0: enabling device (0000 -> 0002) Jul 7 06:15:11.336985 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 7 06:15:11.574822 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 7 06:15:11.579822 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 06:15:11.925829 kernel: nvme nvme0: using unchecked data buffer Jul 7 06:15:12.117836 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jul 7 06:15:12.156508 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jul 7 06:15:12.181639 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 7 06:15:12.183971 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 7 06:15:12.189465 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 06:15:12.207504 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jul 7 06:15:12.212824 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 06:15:12.216364 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 06:15:12.224618 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:15:12.229498 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:15:12.231761 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:15:12.242824 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jul 7 06:15:12.242919 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Jul 7 06:15:12.252102 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jul 7 06:15:12.252282 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jul 7 06:15:12.254284 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jul 7 06:15:12.256920 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jul 7 06:15:12.261843 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jul 7 06:15:12.269978 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jul 7 06:15:12.270032 kernel: pci 7870:00:00.0: enabling Extended Tags Jul 7 06:15:12.292830 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jul 7 06:15:12.297827 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jul 7 06:15:12.300923 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jul 7 06:15:12.315731 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:15:12.323968 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jul 7 06:15:12.332842 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jul 7 06:15:12.337438 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd101557 eth0: VF registering: eth1 Jul 7 06:15:12.337666 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jul 7 06:15:12.365531 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jul 7 06:15:13.229667 disk-uuid[672]: The operation has completed successfully. Jul 7 06:15:13.232426 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 06:15:13.289562 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 06:15:13.289663 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 06:15:13.327671 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jul 7 06:15:13.344863 sh[718]: Success Jul 7 06:15:13.377924 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 06:15:13.377991 kernel: device-mapper: uevent: version 1.0.3 Jul 7 06:15:13.378007 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 06:15:13.388824 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 7 06:15:13.642939 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 06:15:13.647104 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 06:15:13.661944 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 06:15:13.672768 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 06:15:13.672821 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (731) Jul 7 06:15:13.676315 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac Jul 7 06:15:13.676351 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:15:13.677231 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 06:15:14.417755 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 06:15:14.420383 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 06:15:14.423042 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 06:15:14.423942 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 06:15:14.433918 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 7 06:15:14.460829 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (764) Jul 7 06:15:14.464958 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:15:14.469159 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:15:14.469197 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 06:15:14.492864 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:15:14.493796 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 06:15:14.499944 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 06:15:14.517483 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:15:14.521920 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:15:14.550161 systemd-networkd[900]: lo: Link UP Jul 7 06:15:14.550170 systemd-networkd[900]: lo: Gained carrier Jul 7 06:15:14.552004 systemd-networkd[900]: Enumeration completed Jul 7 06:15:14.558948 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jul 7 06:15:14.559194 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 7 06:15:14.552387 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:15:14.566858 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd101557 eth0: Data path switched to VF: enP30832s1 Jul 7 06:15:14.552390 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:15:14.552709 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:15:14.555537 systemd[1]: Reached target network.target - Network. 
Jul 7 06:15:14.563631 systemd-networkd[900]: enP30832s1: Link UP Jul 7 06:15:14.563794 systemd-networkd[900]: eth0: Link UP Jul 7 06:15:14.564108 systemd-networkd[900]: eth0: Gained carrier Jul 7 06:15:14.564118 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:15:14.576975 systemd-networkd[900]: enP30832s1: Gained carrier Jul 7 06:15:14.599832 systemd-networkd[900]: eth0: DHCPv4 address 10.200.4.8/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jul 7 06:15:15.513403 ignition[881]: Ignition 2.21.0 Jul 7 06:15:15.513936 ignition[881]: Stage: fetch-offline Jul 7 06:15:15.516969 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:15:15.514086 ignition[881]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:15:15.518961 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 7 06:15:15.514094 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:15:15.514209 ignition[881]: parsed url from cmdline: "" Jul 7 06:15:15.514215 ignition[881]: no config URL provided Jul 7 06:15:15.514221 ignition[881]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:15:15.514228 ignition[881]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:15:15.514235 ignition[881]: failed to fetch config: resource requires networking Jul 7 06:15:15.514458 ignition[881]: Ignition finished successfully Jul 7 06:15:15.539756 ignition[911]: Ignition 2.21.0 Jul 7 06:15:15.539762 ignition[911]: Stage: fetch Jul 7 06:15:15.539999 ignition[911]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:15:15.540014 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:15:15.540095 ignition[911]: parsed url from cmdline: "" Jul 7 06:15:15.540097 ignition[911]: no config URL provided Jul 7 06:15:15.540102 ignition[911]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 
06:15:15.540108 ignition[911]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:15:15.540154 ignition[911]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 7 06:15:15.605729 ignition[911]: GET result: OK Jul 7 06:15:15.606041 ignition[911]: config has been read from IMDS userdata Jul 7 06:15:15.606072 ignition[911]: parsing config with SHA512: 31d11c0ac03f414b61177c538c17e1af08b1fdf465dfefa03fe5cd8ff4d5f72c002ff4f89e66d2167109eabc8a5a371e86de78951e140eeec8b04fc55769ee2b Jul 7 06:15:15.612377 unknown[911]: fetched base config from "system" Jul 7 06:15:15.612385 unknown[911]: fetched base config from "system" Jul 7 06:15:15.612684 ignition[911]: fetch: fetch complete Jul 7 06:15:15.612390 unknown[911]: fetched user config from "azure" Jul 7 06:15:15.612688 ignition[911]: fetch: fetch passed Jul 7 06:15:15.615431 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 7 06:15:15.612724 ignition[911]: Ignition finished successfully Jul 7 06:15:15.621873 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 06:15:15.642758 ignition[917]: Ignition 2.21.0 Jul 7 06:15:15.642767 ignition[917]: Stage: kargs Jul 7 06:15:15.644851 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 06:15:15.642969 ignition[917]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:15:15.649458 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 06:15:15.642977 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:15:15.643730 ignition[917]: kargs: kargs passed Jul 7 06:15:15.643765 ignition[917]: Ignition finished successfully Jul 7 06:15:15.667238 ignition[924]: Ignition 2.21.0 Jul 7 06:15:15.667248 ignition[924]: Stage: disks Jul 7 06:15:15.667402 ignition[924]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:15:15.669965 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jul 7 06:15:15.667407 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:15:15.672838 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 06:15:15.668625 ignition[924]: disks: disks passed Jul 7 06:15:15.668666 ignition[924]: Ignition finished successfully Jul 7 06:15:15.679865 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 06:15:15.680401 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:15:15.685854 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:15:15.688225 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:15:15.692933 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 06:15:15.697921 systemd-networkd[900]: enP30832s1: Gained IPv6LL Jul 7 06:15:15.780285 systemd-fsck[932]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jul 7 06:15:15.798648 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 06:15:15.803774 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 06:15:16.113829 kernel: EXT4-fs (nvme0n1p9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none. Jul 7 06:15:16.115028 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 06:15:16.115903 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 06:15:16.131916 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:15:16.137950 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 06:15:16.143933 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 7 06:15:16.146945 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jul 7 06:15:16.146980 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:15:16.156818 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (941) Jul 7 06:15:16.163033 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:15:16.163076 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:15:16.163090 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 06:15:16.163002 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 06:15:16.166609 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 06:15:16.174034 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:15:16.462976 systemd-networkd[900]: eth0: Gained IPv6LL Jul 7 06:15:16.710498 coreos-metadata[943]: Jul 07 06:15:16.710 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 7 06:15:16.714460 coreos-metadata[943]: Jul 07 06:15:16.714 INFO Fetch successful Jul 7 06:15:16.715917 coreos-metadata[943]: Jul 07 06:15:16.715 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 7 06:15:16.727917 coreos-metadata[943]: Jul 07 06:15:16.727 INFO Fetch successful Jul 7 06:15:16.744405 coreos-metadata[943]: Jul 07 06:15:16.744 INFO wrote hostname ci-4372.0.1-a-ca7a3a169f to /sysroot/etc/hostname Jul 7 06:15:16.747357 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jul 7 06:15:16.982199 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 06:15:17.036490 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory Jul 7 06:15:17.040227 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 06:15:17.045351 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 06:15:17.734127 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 06:15:17.739105 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 06:15:17.743158 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 06:15:17.753695 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 06:15:17.757505 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:15:17.782504 ignition[1061]: INFO : Ignition 2.21.0 Jul 7 06:15:17.782504 ignition[1061]: INFO : Stage: mount Jul 7 06:15:17.782504 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:15:17.782504 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:15:17.795899 ignition[1061]: INFO : mount: mount passed Jul 7 06:15:17.795899 ignition[1061]: INFO : Ignition finished successfully Jul 7 06:15:17.782746 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 06:15:17.788285 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 06:15:17.795171 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 06:15:17.806883 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 7 06:15:17.823824 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1073) Jul 7 06:15:17.826289 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:15:17.826318 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:15:17.826404 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 06:15:17.831154 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:15:17.851726 ignition[1089]: INFO : Ignition 2.21.0 Jul 7 06:15:17.851726 ignition[1089]: INFO : Stage: files Jul 7 06:15:17.855850 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:15:17.855850 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:15:17.855850 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping Jul 7 06:15:17.868423 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 06:15:17.868423 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 06:15:17.909766 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 06:15:17.913901 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 06:15:17.913901 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 06:15:17.910147 unknown[1089]: wrote ssh authorized keys file for user: core Jul 7 06:15:18.122515 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 7 06:15:18.126557 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 7 06:15:18.190502 ignition[1089]: INFO : files: createFilesystemsFiles: 
createFiles: op(3): GET result: OK Jul 7 06:15:18.457049 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 7 06:15:18.461905 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 06:15:18.461905 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 06:15:18.461905 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:15:18.461905 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:15:18.461905 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:15:18.461905 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:15:18.461905 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:15:18.461905 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:15:18.483839 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:15:18.483839 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:15:18.483839 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 06:15:18.483839 ignition[1089]: INFO : 
files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 06:15:18.483839 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 06:15:18.483839 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 7 06:15:19.267872 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 06:15:19.452722 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 06:15:19.452722 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 06:15:19.514525 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:15:19.523471 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:15:19.523471 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 06:15:19.523471 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 7 06:15:19.530949 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 06:15:19.530949 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:15:19.530949 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:15:19.530949 ignition[1089]: INFO : files: files 
passed Jul 7 06:15:19.530949 ignition[1089]: INFO : Ignition finished successfully Jul 7 06:15:19.531655 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 06:15:19.534847 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 06:15:19.550921 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 06:15:19.559421 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 06:15:19.559622 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 06:15:19.566016 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:15:19.566016 initrd-setup-root-after-ignition[1119]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:15:19.579864 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:15:19.570855 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:15:19.572242 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 06:15:19.573561 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 06:15:19.609105 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 06:15:19.609205 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 06:15:19.610584 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 06:15:19.610647 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 06:15:19.610964 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 06:15:19.611749 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Jul 7 06:15:19.643043 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:15:19.646919 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:15:19.664683 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:15:19.664889 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:15:19.665132 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 06:15:19.665426 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 06:15:19.665538 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:15:19.666037 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 06:15:19.666561 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 06:15:19.676469 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 06:15:19.684680 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:15:19.684965 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 06:15:19.691550 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:15:19.693242 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 06:15:19.696948 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:15:19.699713 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 06:15:19.703957 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 06:15:19.706315 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 06:15:19.709915 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 06:15:19.710039 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:15:19.714156 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:15:19.715482 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:15:19.715774 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 06:15:19.716205 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:15:19.720618 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 06:15:19.720719 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:15:19.730464 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 06:15:19.730592 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:15:19.734986 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 06:15:19.735106 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 06:15:19.738972 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 7 06:15:19.739085 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 06:15:19.746007 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 06:15:19.748891 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 06:15:19.749570 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:15:19.755992 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 06:15:19.778503 ignition[1144]: INFO : Ignition 2.21.0
Jul 7 06:15:19.778503 ignition[1144]: INFO : Stage: umount
Jul 7 06:15:19.778503 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:15:19.778503 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 06:15:19.778503 ignition[1144]: INFO : umount: umount passed
Jul 7 06:15:19.778503 ignition[1144]: INFO : Ignition finished successfully
Jul 7 06:15:19.768364 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 06:15:19.768506 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:15:19.772343 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 06:15:19.774425 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:15:19.793037 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 06:15:19.793121 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 06:15:19.800479 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 06:15:19.800604 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 06:15:19.804958 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 06:15:19.805546 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 06:15:19.805607 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 06:15:19.809211 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 06:15:19.809258 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 06:15:19.812884 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 06:15:19.812923 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 06:15:19.816874 systemd[1]: Stopped target network.target - Network.
Jul 7 06:15:19.820846 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 06:15:19.820892 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:15:19.823162 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 06:15:19.824391 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 06:15:19.827835 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:15:19.830835 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 06:15:19.832917 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 06:15:19.835878 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 06:15:19.835919 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:15:19.839864 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 06:15:19.839893 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:15:19.843858 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 06:15:19.843904 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 06:15:19.847858 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 06:15:19.847890 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 06:15:19.852062 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 06:15:19.856979 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 06:15:19.861103 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 06:15:19.861200 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 06:15:19.864385 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 06:15:19.864786 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 06:15:19.872897 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 06:15:19.872936 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:15:19.877352 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 06:15:19.914518 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd101557 eth0: Data path switched from VF: enP30832s1
Jul 7 06:15:19.914698 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jul 7 06:15:19.881851 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 06:15:19.881904 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:15:19.884362 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:15:19.884771 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 06:15:19.888473 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 06:15:19.896478 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 06:15:19.898304 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:15:19.898353 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:15:19.903281 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 06:15:19.903328 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:15:19.910302 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 06:15:19.910351 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:15:19.916613 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 06:15:19.916668 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:15:19.916944 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 06:15:19.917069 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:15:19.918501 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 06:15:19.918573 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:15:19.918654 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 06:15:19.918676 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:15:19.918920 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 06:15:19.918969 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:15:19.919607 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 06:15:19.919640 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:15:19.919904 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:15:19.919939 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:15:19.921610 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 06:15:19.921679 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 06:15:19.921720 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:15:19.926191 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 06:15:19.926232 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:15:19.944931 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:15:19.944984 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:15:19.950770 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 7 06:15:19.950831 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 7 06:15:19.950871 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:15:19.951231 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 06:15:19.951323 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 06:15:19.951770 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 06:15:19.951889 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 06:15:20.750892 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 06:15:20.751020 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 06:15:20.752710 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 06:15:20.752852 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 06:15:20.752911 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 06:15:20.753951 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 06:15:20.767199 systemd[1]: Switching root.
Jul 7 06:15:20.832750 systemd-journald[205]: Journal stopped
Jul 7 06:15:32.369244 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Jul 7 06:15:32.369279 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 06:15:32.369292 kernel: SELinux: policy capability open_perms=1
Jul 7 06:15:32.369301 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 06:15:32.369309 kernel: SELinux: policy capability always_check_network=0
Jul 7 06:15:32.369318 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 06:15:32.369330 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 06:15:32.369339 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 06:15:32.369349 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 06:15:32.369358 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 06:15:32.369367 kernel: audit: type=1403 audit(1751868927.505:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 06:15:32.369378 systemd[1]: Successfully loaded SELinux policy in 227.689ms.
Jul 7 06:15:32.369388 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.128ms.
Jul 7 06:15:32.369401 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:15:32.369413 systemd[1]: Detected virtualization microsoft.
Jul 7 06:15:32.369422 systemd[1]: Detected architecture x86-64.
Jul 7 06:15:32.369432 systemd[1]: Detected first boot.
Jul 7 06:15:32.369442 systemd[1]: Hostname set to .
Jul 7 06:15:32.369453 systemd[1]: Initializing machine ID from random generator.
Jul 7 06:15:32.369463 zram_generator::config[1187]: No configuration found.
Jul 7 06:15:32.369473 kernel: Guest personality initialized and is inactive
Jul 7 06:15:32.369482 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Jul 7 06:15:32.369491 kernel: Initialized host personality
Jul 7 06:15:32.369499 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 06:15:32.369509 systemd[1]: Populated /etc with preset unit settings.
Jul 7 06:15:32.369522 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 06:15:32.369532 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 06:15:32.369541 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 06:15:32.369550 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:15:32.369562 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 06:15:32.369572 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 06:15:32.369582 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 06:15:32.369593 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 06:15:32.369603 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 06:15:32.369613 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 06:15:32.369623 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 06:15:32.369633 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 06:15:32.369642 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:15:32.369650 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:15:32.369659 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 06:15:32.369670 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 06:15:32.369680 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 06:15:32.369689 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:15:32.369699 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 06:15:32.369709 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:15:32.369718 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:15:32.369729 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 06:15:32.369738 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 06:15:32.369750 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:15:32.369760 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 06:15:32.369771 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:15:32.369781 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:15:32.369790 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:15:32.369800 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:15:32.369829 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 06:15:32.369845 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 06:15:32.369857 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 06:15:32.369866 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:15:32.369875 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:15:32.369884 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:15:32.369894 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 06:15:32.369906 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 06:15:32.369917 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 06:15:32.369928 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 06:15:32.369938 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:32.369949 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 06:15:32.369960 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 06:15:32.369971 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 06:15:32.369982 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 06:15:32.369995 systemd[1]: Reached target machines.target - Containers.
Jul 7 06:15:32.370006 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 06:15:32.370017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:15:32.370028 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:15:32.370038 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 06:15:32.370049 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:15:32.370060 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:15:32.370070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:15:32.370081 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 06:15:32.370093 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:15:32.370105 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 06:15:32.370117 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 06:15:32.370128 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 06:15:32.370139 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 06:15:32.370150 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 06:15:32.370161 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:15:32.370172 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:15:32.370185 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:15:32.370196 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:15:32.370207 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 06:15:32.370218 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 06:15:32.370229 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:15:32.370239 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 06:15:32.370250 systemd[1]: Stopped verity-setup.service.
Jul 7 06:15:32.370261 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:32.370274 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 06:15:32.370285 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 06:15:32.370295 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 06:15:32.370306 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 06:15:32.370316 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 06:15:32.370326 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 06:15:32.370337 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:15:32.370348 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:15:32.370359 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:15:32.370371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:15:32.370381 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:15:32.370392 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:15:32.370403 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:15:32.370413 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 06:15:32.370423 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 06:15:32.370434 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 06:15:32.370444 kernel: loop: module loaded
Jul 7 06:15:32.370456 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:15:32.370466 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:15:32.370476 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:15:32.370486 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 06:15:32.370497 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 06:15:32.370508 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:15:32.370519 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 06:15:32.370536 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 06:15:32.370547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:15:32.370558 kernel: fuse: init (API version 7.41)
Jul 7 06:15:32.370570 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 06:15:32.370583 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:15:32.370594 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 06:15:32.370631 systemd-journald[1270]: Collecting audit messages is disabled.
Jul 7 06:15:32.370659 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:15:32.370672 systemd-journald[1270]: Journal started
Jul 7 06:15:32.370698 systemd-journald[1270]: Runtime Journal (/run/log/journal/b40af018c7a04a839d9660a69aa2502a) is 8M, max 158.9M, 150.9M free.
Jul 7 06:15:31.744441 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 06:15:31.755504 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 7 06:15:31.755993 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 06:15:32.377672 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:15:32.381851 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 06:15:32.390108 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:15:32.394582 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 06:15:32.394743 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 06:15:32.397287 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 06:15:32.400268 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 06:15:32.419982 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 06:15:32.425832 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 06:15:32.435216 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 06:15:32.443476 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:15:32.559827 kernel: loop0: detected capacity change from 0 to 28496
Jul 7 06:15:32.587824 kernel: ACPI: bus type drm_connector registered
Jul 7 06:15:32.588008 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:15:32.588172 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:15:32.669969 systemd-journald[1270]: Time spent on flushing to /var/log/journal/b40af018c7a04a839d9660a69aa2502a is 41.007ms for 984 entries.
Jul 7 06:15:32.669969 systemd-journald[1270]: System Journal (/var/log/journal/b40af018c7a04a839d9660a69aa2502a) is 8M, max 2.6G, 2.6G free.
Jul 7 06:15:34.147012 systemd-journald[1270]: Received client request to flush runtime journal.
Jul 7 06:15:32.762960 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:15:32.796162 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 06:15:32.799004 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 06:15:32.801253 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 06:15:34.009172 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 06:15:34.013331 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 06:15:34.148058 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 06:15:34.319830 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 06:15:34.555828 kernel: loop1: detected capacity change from 0 to 229808
Jul 7 06:15:35.014099 kernel: loop2: detected capacity change from 0 to 113872
Jul 7 06:15:35.111224 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 06:15:35.114602 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:15:35.234128 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Jul 7 06:15:35.234139 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Jul 7 06:15:35.236947 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:15:36.778833 kernel: loop3: detected capacity change from 0 to 146240
Jul 7 06:15:37.554792 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 06:15:37.556421 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 7 06:15:37.627174 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 06:15:37.630324 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:15:37.666009 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Jul 7 06:15:37.793077 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:15:37.799124 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:15:37.836851 kernel: loop4: detected capacity change from 0 to 28496
Jul 7 06:15:37.862142 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 06:15:37.864866 kernel: loop5: detected capacity change from 0 to 229808
Jul 7 06:15:37.885836 kernel: loop6: detected capacity change from 0 to 113872
Jul 7 06:15:37.890638 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 06:15:37.906865 kernel: loop7: detected capacity change from 0 to 146240
Jul 7 06:15:37.919418 (sd-merge)[1383]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jul 7 06:15:37.920071 (sd-merge)[1383]: Merged extensions into '/usr'.
Jul 7 06:15:37.927725 systemd[1]: Reload requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 06:15:37.927740 systemd[1]: Reloading...
Jul 7 06:15:37.946820 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#40 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 7 06:15:38.034906 zram_generator::config[1428]: No configuration found.
Jul 7 06:15:38.054871 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 06:15:38.059593 kernel: hv_vmbus: registering driver hv_balloon
Jul 7 06:15:38.059646 kernel: hv_vmbus: registering driver hyperv_fb
Jul 7 06:15:38.067252 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 7 06:15:38.067307 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 7 06:15:38.067324 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 7 06:15:38.069827 kernel: Console: switching to colour dummy device 80x25
Jul 7 06:15:38.078018 kernel: Console: switching to colour frame buffer device 128x48
Jul 7 06:15:38.250510 systemd-networkd[1361]: lo: Link UP
Jul 7 06:15:38.250524 systemd-networkd[1361]: lo: Gained carrier
Jul 7 06:15:38.258619 systemd-networkd[1361]: Enumeration completed
Jul 7 06:15:38.262161 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:15:38.262173 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:15:38.263875 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jul 7 06:15:38.269688 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:15:38.271830 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jul 7 06:15:38.276871 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd101557 eth0: Data path switched to VF: enP30832s1
Jul 7 06:15:38.281005 systemd-networkd[1361]: enP30832s1: Link UP
Jul 7 06:15:38.281142 systemd-networkd[1361]: eth0: Link UP
Jul 7 06:15:38.281150 systemd-networkd[1361]: eth0: Gained carrier
Jul 7 06:15:38.281165 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:15:38.285021 systemd-networkd[1361]: enP30832s1: Gained carrier
Jul 7 06:15:38.293861 systemd-networkd[1361]: eth0: DHCPv4 address 10.200.4.8/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jul 7 06:15:38.486233 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jul 7 06:15:38.488567 systemd[1]: Reloading finished in 560 ms.
Jul 7 06:15:38.501820 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jul 7 06:15:38.514157 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 06:15:38.515688 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:15:38.519068 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 06:15:38.561733 systemd[1]: Starting ensure-sysext.service...
Jul 7 06:15:38.565929 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 06:15:38.578410 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 7 06:15:38.584459 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 06:15:38.588414 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:15:38.593664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:15:38.616839 systemd-tmpfiles[1525]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 7 06:15:38.616866 systemd-tmpfiles[1525]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 7 06:15:38.617048 systemd-tmpfiles[1525]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 06:15:38.617188 systemd[1]: Reload requested from client PID 1521 ('systemctl') (unit ensure-sysext.service)...
Jul 7 06:15:38.617199 systemd[1]: Reloading...
Jul 7 06:15:38.617243 systemd-tmpfiles[1525]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 06:15:38.618824 systemd-tmpfiles[1525]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 06:15:38.619061 systemd-tmpfiles[1525]: ACLs are not supported, ignoring.
Jul 7 06:15:38.619100 systemd-tmpfiles[1525]: ACLs are not supported, ignoring.
Jul 7 06:15:38.623369 systemd-tmpfiles[1525]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:15:38.623378 systemd-tmpfiles[1525]: Skipping /boot
Jul 7 06:15:38.634545 systemd-tmpfiles[1525]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:15:38.634947 systemd-tmpfiles[1525]: Skipping /boot
Jul 7 06:15:38.697865 zram_generator::config[1568]: No configuration found.
Jul 7 06:15:38.773228 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:15:38.866257 systemd[1]: Reloading finished in 248 ms.
Jul 7 06:15:38.893032 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 06:15:38.896153 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 7 06:15:38.899077 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:15:38.902085 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:15:38.909263 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:15:38.913098 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 06:15:38.918758 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 06:15:38.923586 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:15:38.928885 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 06:15:38.935760 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:38.935995 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:15:38.941392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:15:38.947297 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:15:38.952473 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:15:38.955078 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:15:38.955198 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:15:38.955289 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:38.956239 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:15:38.956770 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:15:38.966565 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:38.967298 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:15:38.969784 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:15:38.972148 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:15:38.972349 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:15:38.972507 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:38.973694 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:15:38.974784 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:15:38.978344 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:15:38.978987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:15:38.982105 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:15:38.982248 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:15:38.987866 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 06:15:38.995763 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 06:15:39.002423 systemd[1]: Finished ensure-sysext.service.
Jul 7 06:15:39.004612 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:39.004866 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:15:39.005606 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:15:39.010977 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:15:39.013898 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:15:39.020690 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:15:39.020869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:15:39.020900 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:15:39.020945 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 06:15:39.021006 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:39.029617 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:15:39.029789 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:15:39.033521 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:15:39.033666 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:15:39.035696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:15:39.035844 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:15:39.038448 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:15:39.039692 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:15:39.039984 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:15:39.044435 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:15:39.048800 augenrules[1671]: No rules
Jul 7 06:15:39.049657 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:15:39.049856 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:15:39.080485 systemd-resolved[1631]: Positive Trust Anchors:
Jul 7 06:15:39.080498 systemd-resolved[1631]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:15:39.080530 systemd-resolved[1631]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:15:39.083591 systemd-resolved[1631]: Using system hostname 'ci-4372.0.1-a-ca7a3a169f'.
Jul 7 06:15:39.085175 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:15:39.086609 systemd[1]: Reached target network.target - Network.
Jul 7 06:15:39.088026 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:15:39.358241 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 06:15:39.361402 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 06:15:40.015040 systemd-networkd[1361]: eth0: Gained IPv6LL
Jul 7 06:15:40.017585 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 06:15:40.021186 systemd[1]: Reached target network-online.target - Network is Online.
Jul 7 06:15:40.334989 systemd-networkd[1361]: enP30832s1: Gained IPv6LL
Jul 7 06:15:41.032037 ldconfig[1280]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 06:15:41.061043 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 06:15:41.065500 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 06:15:41.087128 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 06:15:41.090032 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:15:41.092997 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 06:15:41.095886 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 06:15:41.098864 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 7 06:15:41.100390 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 06:15:41.101927 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 06:15:41.104873 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 06:15:41.107866 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 06:15:41.107904 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:15:41.109092 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:15:41.113131 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 06:15:41.118016 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 06:15:41.121675 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 7 06:15:41.124977 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 7 06:15:41.127866 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 7 06:15:41.149297 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 06:15:41.153174 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 7 06:15:41.156344 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 06:15:41.158582 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:15:41.159909 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:15:41.161296 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:15:41.161318 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:15:41.163306 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 7 06:15:41.166900 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 06:15:41.177477 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 7 06:15:41.183993 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 06:15:41.187758 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 06:15:41.194262 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 06:15:41.198959 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 06:15:41.201306 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 06:15:41.202155 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 7 06:15:41.205117 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Jul 7 06:15:41.208616 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jul 7 06:15:41.210907 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jul 7 06:15:41.213950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:15:41.216354 jq[1689]: false
Jul 7 06:15:41.224405 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 06:15:41.227248 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 7 06:15:41.228734 google_oslogin_nss_cache[1694]: oslogin_cache_refresh[1694]: Refreshing passwd entry cache
Jul 7 06:15:41.230958 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 06:15:41.232853 oslogin_cache_refresh[1694]: Refreshing passwd entry cache
Jul 7 06:15:41.233505 KVP[1695]: KVP starting; pid is:1695
Jul 7 06:15:41.240835 kernel: hv_utils: KVP IC version 4.0
Jul 7 06:15:41.240051 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 06:15:41.240074 KVP[1695]: KVP LIC Version: 3.1
Jul 7 06:15:41.243953 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 06:15:41.252960 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 06:15:41.257722 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 06:15:41.261107 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 06:15:41.262038 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 06:15:41.267977 extend-filesystems[1693]: Found /dev/nvme0n1p6
Jul 7 06:15:41.268627 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 06:15:41.272422 oslogin_cache_refresh[1694]: Failure getting users, quitting
Jul 7 06:15:41.273927 google_oslogin_nss_cache[1694]: oslogin_cache_refresh[1694]: Failure getting users, quitting
Jul 7 06:15:41.273927 google_oslogin_nss_cache[1694]: oslogin_cache_refresh[1694]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:15:41.273927 google_oslogin_nss_cache[1694]: oslogin_cache_refresh[1694]: Refreshing group entry cache
Jul 7 06:15:41.272440 oslogin_cache_refresh[1694]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:15:41.272477 oslogin_cache_refresh[1694]: Refreshing group entry cache
Jul 7 06:15:41.282231 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 06:15:41.285460 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 06:15:41.287165 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 06:15:41.290411 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 06:15:41.292069 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 06:15:41.295007 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 06:15:41.295193 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 06:15:41.297543 google_oslogin_nss_cache[1694]: oslogin_cache_refresh[1694]: Failure getting groups, quitting
Jul 7 06:15:41.297543 google_oslogin_nss_cache[1694]: oslogin_cache_refresh[1694]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:15:41.297506 oslogin_cache_refresh[1694]: Failure getting groups, quitting
Jul 7 06:15:41.297515 oslogin_cache_refresh[1694]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:15:41.300226 extend-filesystems[1693]: Found /dev/nvme0n1p9
Jul 7 06:15:41.307386 extend-filesystems[1693]: Checking size of /dev/nvme0n1p9
Jul 7 06:15:41.311697 jq[1712]: true
Jul 7 06:15:41.302271 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 7 06:15:41.302464 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 7 06:15:41.306748 (chronyd)[1684]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 7 06:15:41.316841 chronyd[1728]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 7 06:15:41.332525 (ntainerd)[1734]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 06:15:41.334909 chronyd[1728]: Timezone right/UTC failed leap second check, ignoring
Jul 7 06:15:41.335072 chronyd[1728]: Loaded seccomp filter (level 2)
Jul 7 06:15:41.336886 systemd[1]: Started chronyd.service - NTP client/server.
Jul 7 06:15:41.340203 jq[1726]: true
Jul 7 06:15:41.348728 extend-filesystems[1693]: Old size kept for /dev/nvme0n1p9
Jul 7 06:15:41.352438 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 06:15:41.352613 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 06:15:41.372698 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 06:15:41.403023 tar[1720]: linux-amd64/LICENSE
Jul 7 06:15:41.403023 tar[1720]: linux-amd64/helm
Jul 7 06:15:41.442751 systemd-logind[1706]: New seat seat0.
Jul 7 06:15:41.448536 systemd-logind[1706]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jul 7 06:15:41.448677 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 06:15:41.456113 dbus-daemon[1687]: [system] SELinux support is enabled
Jul 7 06:15:41.456783 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 06:15:41.463978 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 06:15:41.475010 bash[1765]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 06:15:41.464005 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 06:15:41.466604 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 06:15:41.466626 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 06:15:41.472031 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 06:15:41.476710 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 7 06:15:41.488401 dbus-daemon[1687]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 7 06:15:41.491576 update_engine[1709]: I20250707 06:15:41.491489 1709 main.cc:92] Flatcar Update Engine starting
Jul 7 06:15:41.500107 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 06:15:41.502910 update_engine[1709]: I20250707 06:15:41.500324 1709 update_check_scheduler.cc:74] Next update check in 3m22s
Jul 7 06:15:41.555156 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 06:15:41.605321 coreos-metadata[1686]: Jul 07 06:15:41.604 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 7 06:15:41.607959 coreos-metadata[1686]: Jul 07 06:15:41.607 INFO Fetch successful
Jul 7 06:15:41.608755 coreos-metadata[1686]: Jul 07 06:15:41.608 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 7 06:15:41.612166 coreos-metadata[1686]: Jul 07 06:15:41.612 INFO Fetch successful
Jul 7 06:15:41.614217 coreos-metadata[1686]: Jul 07 06:15:41.614 INFO Fetching http://168.63.129.16/machine/0cf3c632-2e4d-4056-9161-b6d990f8abe5/9cde8bfe%2Dc617%2D4346%2D9a33%2D69a59786bb45.%5Fci%2D4372.0.1%2Da%2Dca7a3a169f?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 7 06:15:41.615979 coreos-metadata[1686]: Jul 07 06:15:41.615 INFO Fetch successful
Jul 7 06:15:41.616313 coreos-metadata[1686]: Jul 07 06:15:41.616 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 7 06:15:41.625143 coreos-metadata[1686]: Jul 07 06:15:41.625 INFO Fetch successful
Jul 7 06:15:41.700426 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 7 06:15:41.703700 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 7 06:15:41.820952 sshd_keygen[1722]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 7 06:15:41.873386 locksmithd[1777]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 06:15:41.900539 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 7 06:15:41.906942 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 7 06:15:41.910991 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul 7 06:15:41.942543 systemd[1]: issuegen.service: Deactivated successfully.
Jul 7 06:15:41.942733 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 7 06:15:41.950028 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 7 06:15:41.962900 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jul 7 06:15:42.001923 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 7 06:15:42.008755 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 7 06:15:42.015255 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 7 06:15:42.018558 systemd[1]: Reached target getty.target - Login Prompts.
Jul 7 06:15:42.221040 tar[1720]: linux-amd64/README.md
Jul 7 06:15:42.236091 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 7 06:15:42.241990 containerd[1734]: time="2025-07-07T06:15:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 7 06:15:42.243040 containerd[1734]: time="2025-07-07T06:15:42.242530018Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 7 06:15:42.252324 containerd[1734]: time="2025-07-07T06:15:42.252006526Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.992µs"
Jul 7 06:15:42.252324 containerd[1734]: time="2025-07-07T06:15:42.252038934Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 7 06:15:42.252324 containerd[1734]: time="2025-07-07T06:15:42.252060664Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 7 06:15:42.252324 containerd[1734]: time="2025-07-07T06:15:42.252188271Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 7 06:15:42.252324 containerd[1734]: time="2025-07-07T06:15:42.252205991Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 7 06:15:42.252324 containerd[1734]: time="2025-07-07T06:15:42.252230838Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:15:42.252324 containerd[1734]: time="2025-07-07T06:15:42.252281377Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:15:42.252324 containerd[1734]: time="2025-07-07T06:15:42.252291407Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:15:42.252562 containerd[1734]: time="2025-07-07T06:15:42.252525426Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:15:42.252562 containerd[1734]: time="2025-07-07T06:15:42.252542646Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:15:42.252562 containerd[1734]: time="2025-07-07T06:15:42.252553981Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:15:42.252629 containerd[1734]: time="2025-07-07T06:15:42.252562813Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 7 06:15:42.252649 containerd[1734]: time="2025-07-07T06:15:42.252628075Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 7 06:15:42.253388 containerd[1734]: time="2025-07-07T06:15:42.252796814Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:15:42.253388 containerd[1734]: time="2025-07-07T06:15:42.252853526Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:15:42.253388 containerd[1734]: time="2025-07-07T06:15:42.252866278Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 7 06:15:42.253388 containerd[1734]: time="2025-07-07T06:15:42.252901665Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 7 06:15:42.253388 containerd[1734]: time="2025-07-07T06:15:42.253142184Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 7 06:15:42.253388 containerd[1734]: time="2025-07-07T06:15:42.253198501Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 06:15:42.276743 containerd[1734]: time="2025-07-07T06:15:42.276713577Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 7 06:15:42.276914 containerd[1734]: time="2025-07-07T06:15:42.276860207Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 7 06:15:42.276914 containerd[1734]: time="2025-07-07T06:15:42.276880163Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 7 06:15:42.277041 containerd[1734]: time="2025-07-07T06:15:42.276895878Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 7 06:15:42.277041 containerd[1734]: time="2025-07-07T06:15:42.277024991Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 7 06:15:42.277122 containerd[1734]: time="2025-07-07T06:15:42.277111923Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 7 06:15:42.277177 containerd[1734]: time="2025-07-07T06:15:42.277162962Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 7 06:15:42.277229 containerd[1734]: time="2025-07-07T06:15:42.277220578Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 7 06:15:42.277270 containerd[1734]: time="2025-07-07T06:15:42.277262405Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 7 06:15:42.277320 containerd[1734]: time="2025-07-07T06:15:42.277312883Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 7 06:15:42.277373 containerd[1734]: time="2025-07-07T06:15:42.277364559Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 7 06:15:42.277421 containerd[1734]: time="2025-07-07T06:15:42.277412371Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 7 06:15:42.277606 containerd[1734]: time="2025-07-07T06:15:42.277562099Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 7 06:15:42.277606 containerd[1734]: time="2025-07-07T06:15:42.277587783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 7 06:15:42.277671 containerd[1734]: time="2025-07-07T06:15:42.277663458Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 7 06:15:42.277731 containerd[1734]: time="2025-07-07T06:15:42.277705959Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 7 06:15:42.277731 containerd[1734]: time="2025-07-07T06:15:42.277717293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 7 06:15:42.277813 containerd[1734]: time="2025-07-07T06:15:42.277786308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 7 06:15:42.277857 containerd[1734]: time="2025-07-07T06:15:42.277801361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 7 06:15:42.277905 containerd[1734]: time="2025-07-07T06:15:42.277882552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 7 06:15:42.277936 containerd[1734]: time="2025-07-07T06:15:42.277907418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 7 06:15:42.277936 containerd[1734]: time="2025-07-07T06:15:42.277920641Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 7 06:15:42.277979 containerd[1734]: time="2025-07-07T06:15:42.277934058Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 7 06:15:42.278027 containerd[1734]: time="2025-07-07T06:15:42.278011148Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 7 06:15:42.278051 containerd[1734]: time="2025-07-07T06:15:42.278029962Z" level=info msg="Start snapshots syncer"
Jul 7 06:15:42.278070 containerd[1734]: time="2025-07-07T06:15:42.278056260Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 7 06:15:42.278524 containerd[1734]: time="2025-07-07T06:15:42.278414016Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":fals
e,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 06:15:42.278524 containerd[1734]: time="2025-07-07T06:15:42.278470325Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 06:15:42.278817 containerd[1734]: time="2025-07-07T06:15:42.278744248Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 06:15:42.278952 containerd[1734]: time="2025-07-07T06:15:42.278930852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 06:15:42.279018 containerd[1734]: time="2025-07-07T06:15:42.279001734Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 06:15:42.279065 containerd[1734]: time="2025-07-07T06:15:42.279057256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 06:15:42.279116 containerd[1734]: time="2025-07-07T06:15:42.279107001Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 06:15:42.279201 containerd[1734]: time="2025-07-07T06:15:42.279169428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 06:15:42.279201 containerd[1734]: 
time="2025-07-07T06:15:42.279185131Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 06:15:42.279300 containerd[1734]: time="2025-07-07T06:15:42.279255503Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 06:15:42.279300 containerd[1734]: time="2025-07-07T06:15:42.279284763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 06:15:42.279300 containerd[1734]: time="2025-07-07T06:15:42.279299860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 06:15:42.279376 containerd[1734]: time="2025-07-07T06:15:42.279315009Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 06:15:42.279376 containerd[1734]: time="2025-07-07T06:15:42.279362649Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:15:42.279419 containerd[1734]: time="2025-07-07T06:15:42.279385175Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:15:42.279453 containerd[1734]: time="2025-07-07T06:15:42.279395084Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:15:42.279478 containerd[1734]: time="2025-07-07T06:15:42.279450895Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:15:42.279478 containerd[1734]: time="2025-07-07T06:15:42.279459618Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 06:15:42.279478 containerd[1734]: time="2025-07-07T06:15:42.279469653Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 06:15:42.279539 containerd[1734]: time="2025-07-07T06:15:42.279491294Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 06:15:42.279539 containerd[1734]: time="2025-07-07T06:15:42.279509084Z" level=info msg="runtime interface created" Jul 7 06:15:42.279539 containerd[1734]: time="2025-07-07T06:15:42.279514880Z" level=info msg="created NRI interface" Jul 7 06:15:42.279539 containerd[1734]: time="2025-07-07T06:15:42.279524210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 06:15:42.279539 containerd[1734]: time="2025-07-07T06:15:42.279536257Z" level=info msg="Connect containerd service" Jul 7 06:15:42.279644 containerd[1734]: time="2025-07-07T06:15:42.279573321Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:15:42.280794 containerd[1734]: time="2025-07-07T06:15:42.280517463Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:15:42.700871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:15:42.712021 (kubelet)[1848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:15:42.886053 containerd[1734]: time="2025-07-07T06:15:42.882700205Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 06:15:42.886053 containerd[1734]: time="2025-07-07T06:15:42.882778310Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 7 06:15:42.886053 containerd[1734]: time="2025-07-07T06:15:42.882825357Z" level=info msg="Start subscribing containerd event" Jul 7 06:15:42.886053 containerd[1734]: time="2025-07-07T06:15:42.882860147Z" level=info msg="Start recovering state" Jul 7 06:15:42.886053 containerd[1734]: time="2025-07-07T06:15:42.882983286Z" level=info msg="Start event monitor" Jul 7 06:15:42.886053 containerd[1734]: time="2025-07-07T06:15:42.882998864Z" level=info msg="Start cni network conf syncer for default" Jul 7 06:15:42.886053 containerd[1734]: time="2025-07-07T06:15:42.883007021Z" level=info msg="Start streaming server" Jul 7 06:15:42.886053 containerd[1734]: time="2025-07-07T06:15:42.883017714Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 06:15:42.886053 containerd[1734]: time="2025-07-07T06:15:42.883026910Z" level=info msg="runtime interface starting up..." Jul 7 06:15:42.886053 containerd[1734]: time="2025-07-07T06:15:42.883034441Z" level=info msg="starting plugins..." Jul 7 06:15:42.886053 containerd[1734]: time="2025-07-07T06:15:42.883048846Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 06:15:42.886053 containerd[1734]: time="2025-07-07T06:15:42.883361295Z" level=info msg="containerd successfully booted in 0.641954s" Jul 7 06:15:42.883229 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 06:15:42.886176 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 06:15:42.892035 systemd[1]: Startup finished in 3.205s (kernel) + 17.190s (initrd) + 15.612s (userspace) = 36.008s. Jul 7 06:15:43.095840 login[1828]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 06:15:43.097857 login[1829]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 06:15:43.104199 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jul 7 06:15:43.105350 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 06:15:43.115453 systemd-logind[1706]: New session 2 of user core. Jul 7 06:15:43.119978 systemd-logind[1706]: New session 1 of user core. Jul 7 06:15:43.130952 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:15:43.134415 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 06:15:43.145794 (systemd)[1863]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:15:43.148070 systemd-logind[1706]: New session c1 of user core. Jul 7 06:15:43.314116 systemd[1863]: Queued start job for default target default.target. Jul 7 06:15:43.321469 systemd[1863]: Created slice app.slice - User Application Slice. Jul 7 06:15:43.321499 systemd[1863]: Reached target paths.target - Paths. Jul 7 06:15:43.321527 systemd[1863]: Reached target timers.target - Timers. Jul 7 06:15:43.322446 systemd[1863]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 06:15:43.329560 systemd[1863]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:15:43.329609 systemd[1863]: Reached target sockets.target - Sockets. Jul 7 06:15:43.329641 systemd[1863]: Reached target basic.target - Basic System. Jul 7 06:15:43.329703 systemd[1863]: Reached target default.target - Main User Target. Jul 7 06:15:43.329728 systemd[1863]: Startup finished in 175ms. Jul 7 06:15:43.329746 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 06:15:43.331416 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:15:43.332742 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 7 06:15:43.440999 kubelet[1848]: E0707 06:15:43.440743 1848 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:15:43.445942 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:15:43.446077 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:15:43.446681 systemd[1]: kubelet.service: Consumed 966ms CPU time, 268.6M memory peak. Jul 7 06:15:43.526324 waagent[1826]: 2025-07-07T06:15:43.526248Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 7 06:15:43.527667 waagent[1826]: 2025-07-07T06:15:43.527625Z INFO Daemon Daemon OS: flatcar 4372.0.1 Jul 7 06:15:43.528934 waagent[1826]: 2025-07-07T06:15:43.528783Z INFO Daemon Daemon Python: 3.11.12 Jul 7 06:15:43.530121 waagent[1826]: 2025-07-07T06:15:43.530071Z INFO Daemon Daemon Run daemon Jul 7 06:15:43.531345 waagent[1826]: 2025-07-07T06:15:43.531309Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.0.1' Jul 7 06:15:43.533392 waagent[1826]: 2025-07-07T06:15:43.533324Z INFO Daemon Daemon Using waagent for provisioning Jul 7 06:15:43.534619 waagent[1826]: 2025-07-07T06:15:43.534588Z INFO Daemon Daemon Activate resource disk Jul 7 06:15:43.535725 waagent[1826]: 2025-07-07T06:15:43.535660Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 7 06:15:43.538499 waagent[1826]: 2025-07-07T06:15:43.538463Z INFO Daemon Daemon Found device: None Jul 7 06:15:43.539541 waagent[1826]: 2025-07-07T06:15:43.539510Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 7 06:15:43.540761 waagent[1826]: 2025-07-07T06:15:43.540039Z ERROR Daemon 
Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 7 06:15:43.543703 waagent[1826]: 2025-07-07T06:15:43.543663Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 06:15:43.545210 waagent[1826]: 2025-07-07T06:15:43.545180Z INFO Daemon Daemon Running default provisioning handler Jul 7 06:15:43.551611 waagent[1826]: 2025-07-07T06:15:43.551286Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 7 06:15:43.552015 waagent[1826]: 2025-07-07T06:15:43.551961Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 7 06:15:43.552244 waagent[1826]: 2025-07-07T06:15:43.552222Z INFO Daemon Daemon cloud-init is enabled: False Jul 7 06:15:43.552840 waagent[1826]: 2025-07-07T06:15:43.552801Z INFO Daemon Daemon Copying ovf-env.xml Jul 7 06:15:43.619511 waagent[1826]: 2025-07-07T06:15:43.619468Z INFO Daemon Daemon Successfully mounted dvd Jul 7 06:15:43.644844 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 7 06:15:43.645623 waagent[1826]: 2025-07-07T06:15:43.645303Z INFO Daemon Daemon Detect protocol endpoint Jul 7 06:15:43.646959 waagent[1826]: 2025-07-07T06:15:43.646915Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 06:15:43.648395 waagent[1826]: 2025-07-07T06:15:43.648366Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 7 06:15:43.649752 waagent[1826]: 2025-07-07T06:15:43.649547Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 7 06:15:43.651120 waagent[1826]: 2025-07-07T06:15:43.651085Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 7 06:15:43.652254 waagent[1826]: 2025-07-07T06:15:43.652223Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 7 06:15:43.669602 waagent[1826]: 2025-07-07T06:15:43.669572Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 7 06:15:43.670233 waagent[1826]: 2025-07-07T06:15:43.669853Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 7 06:15:43.670233 waagent[1826]: 2025-07-07T06:15:43.669978Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 7 06:15:43.750992 waagent[1826]: 2025-07-07T06:15:43.750903Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 7 06:15:43.752337 waagent[1826]: 2025-07-07T06:15:43.752256Z INFO Daemon Daemon Forcing an update of the goal state. Jul 7 06:15:43.766747 waagent[1826]: 2025-07-07T06:15:43.766712Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 06:15:43.785481 waagent[1826]: 2025-07-07T06:15:43.785452Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 7 06:15:43.787062 waagent[1826]: 2025-07-07T06:15:43.787021Z INFO Daemon Jul 7 06:15:43.787608 waagent[1826]: 2025-07-07T06:15:43.787332Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f489dd8e-7b0d-4ab5-805a-b281e8ce679b eTag: 741244245078694882 source: Fabric] Jul 7 06:15:43.790345 waagent[1826]: 2025-07-07T06:15:43.790315Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jul 7 06:15:43.792088 waagent[1826]: 2025-07-07T06:15:43.791918Z INFO Daemon Jul 7 06:15:43.792655 waagent[1826]: 2025-07-07T06:15:43.792621Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 7 06:15:43.797890 waagent[1826]: 2025-07-07T06:15:43.797860Z INFO Daemon Daemon Downloading artifacts profile blob Jul 7 06:15:43.884574 waagent[1826]: 2025-07-07T06:15:43.884527Z INFO Daemon Downloaded certificate {'thumbprint': '5A8A9008C599266507DD6204580D449D3E4337D2', 'hasPrivateKey': True} Jul 7 06:15:43.887320 waagent[1826]: 2025-07-07T06:15:43.887282Z INFO Daemon Fetch goal state completed Jul 7 06:15:43.895895 waagent[1826]: 2025-07-07T06:15:43.895849Z INFO Daemon Daemon Starting provisioning Jul 7 06:15:43.896584 waagent[1826]: 2025-07-07T06:15:43.896270Z INFO Daemon Daemon Handle ovf-env.xml. Jul 7 06:15:43.897002 waagent[1826]: 2025-07-07T06:15:43.896969Z INFO Daemon Daemon Set hostname [ci-4372.0.1-a-ca7a3a169f] Jul 7 06:15:43.900419 waagent[1826]: 2025-07-07T06:15:43.900380Z INFO Daemon Daemon Publish hostname [ci-4372.0.1-a-ca7a3a169f] Jul 7 06:15:43.900747 waagent[1826]: 2025-07-07T06:15:43.900651Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 7 06:15:43.900747 waagent[1826]: 2025-07-07T06:15:43.900892Z INFO Daemon Daemon Primary interface is [eth0] Jul 7 06:15:43.908319 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:15:43.908325 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 7 06:15:43.908348 systemd-networkd[1361]: eth0: DHCP lease lost Jul 7 06:15:43.909261 waagent[1826]: 2025-07-07T06:15:43.909220Z INFO Daemon Daemon Create user account if not exists Jul 7 06:15:43.910699 waagent[1826]: 2025-07-07T06:15:43.910183Z INFO Daemon Daemon User core already exists, skip useradd Jul 7 06:15:43.910699 waagent[1826]: 2025-07-07T06:15:43.910307Z INFO Daemon Daemon Configure sudoer Jul 7 06:15:43.915769 waagent[1826]: 2025-07-07T06:15:43.915722Z INFO Daemon Daemon Configure sshd Jul 7 06:15:43.921437 waagent[1826]: 2025-07-07T06:15:43.921394Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 7 06:15:43.926357 waagent[1826]: 2025-07-07T06:15:43.921544Z INFO Daemon Daemon Deploy ssh public key. Jul 7 06:15:43.929873 systemd-networkd[1361]: eth0: DHCPv4 address 10.200.4.8/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jul 7 06:15:44.988687 waagent[1826]: 2025-07-07T06:15:44.988612Z INFO Daemon Daemon Provisioning complete Jul 7 06:15:45.002195 waagent[1826]: 2025-07-07T06:15:45.002151Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 7 06:15:45.002721 waagent[1826]: 2025-07-07T06:15:45.002412Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 7 06:15:45.002721 waagent[1826]: 2025-07-07T06:15:45.002618Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 7 06:15:45.106945 waagent[1916]: 2025-07-07T06:15:45.106873Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 7 06:15:45.107246 waagent[1916]: 2025-07-07T06:15:45.106979Z INFO ExtHandler ExtHandler OS: flatcar 4372.0.1 Jul 7 06:15:45.107246 waagent[1916]: 2025-07-07T06:15:45.107023Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 7 06:15:45.107246 waagent[1916]: 2025-07-07T06:15:45.107066Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jul 7 06:15:45.144448 waagent[1916]: 2025-07-07T06:15:45.144398Z INFO ExtHandler ExtHandler Distro: flatcar-4372.0.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 7 06:15:45.144580 waagent[1916]: 2025-07-07T06:15:45.144554Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 06:15:45.144623 waagent[1916]: 2025-07-07T06:15:45.144609Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 06:15:45.155456 waagent[1916]: 2025-07-07T06:15:45.155403Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 06:15:45.165445 waagent[1916]: 2025-07-07T06:15:45.165413Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 7 06:15:45.165767 waagent[1916]: 2025-07-07T06:15:45.165741Z INFO ExtHandler Jul 7 06:15:45.165824 waagent[1916]: 2025-07-07T06:15:45.165792Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7304f5b1-5acb-4c5b-909a-abfe0a1e7a34 eTag: 741244245078694882 source: Fabric] Jul 7 06:15:45.166034 waagent[1916]: 2025-07-07T06:15:45.166007Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 7 06:15:45.166364 waagent[1916]: 2025-07-07T06:15:45.166337Z INFO ExtHandler Jul 7 06:15:45.166402 waagent[1916]: 2025-07-07T06:15:45.166377Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 7 06:15:45.173204 waagent[1916]: 2025-07-07T06:15:45.173179Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 7 06:15:45.243217 waagent[1916]: 2025-07-07T06:15:45.243138Z INFO ExtHandler Downloaded certificate {'thumbprint': '5A8A9008C599266507DD6204580D449D3E4337D2', 'hasPrivateKey': True} Jul 7 06:15:45.243495 waagent[1916]: 2025-07-07T06:15:45.243469Z INFO ExtHandler Fetch goal state completed Jul 7 06:15:45.261448 waagent[1916]: 2025-07-07T06:15:45.261404Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 7 06:15:45.265605 waagent[1916]: 2025-07-07T06:15:45.265557Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1916 Jul 7 06:15:45.265714 waagent[1916]: 2025-07-07T06:15:45.265674Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 7 06:15:45.265968 waagent[1916]: 2025-07-07T06:15:45.265946Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 7 06:15:45.266852 waagent[1916]: 2025-07-07T06:15:45.266788Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.0.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 7 06:15:45.267112 waagent[1916]: 2025-07-07T06:15:45.267085Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.0.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 7 06:15:45.267203 waagent[1916]: 2025-07-07T06:15:45.267182Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 7 06:15:45.267538 waagent[1916]: 2025-07-07T06:15:45.267514Z INFO ExtHandler ExtHandler Starting setup 
for Persistent firewall rules Jul 7 06:15:45.284961 waagent[1916]: 2025-07-07T06:15:45.284937Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 7 06:15:45.285087 waagent[1916]: 2025-07-07T06:15:45.285067Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 7 06:15:45.290789 waagent[1916]: 2025-07-07T06:15:45.290666Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 7 06:15:45.296262 systemd[1]: Reload requested from client PID 1931 ('systemctl') (unit waagent.service)... Jul 7 06:15:45.296276 systemd[1]: Reloading... Jul 7 06:15:45.370866 zram_generator::config[1972]: No configuration found. Jul 7 06:15:45.446057 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:15:45.538138 systemd[1]: Reloading finished in 241 ms. Jul 7 06:15:45.554727 waagent[1916]: 2025-07-07T06:15:45.554084Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 7 06:15:45.554727 waagent[1916]: 2025-07-07T06:15:45.554190Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 7 06:15:45.652430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jul 7 06:15:45.913737 waagent[1916]: 2025-07-07T06:15:45.913639Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 7 06:15:45.913991 waagent[1916]: 2025-07-07T06:15:45.913961Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 7 06:15:45.914745 waagent[1916]: 2025-07-07T06:15:45.914714Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 7 06:15:45.914915 waagent[1916]: 2025-07-07T06:15:45.914892Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 06:15:45.914979 waagent[1916]: 2025-07-07T06:15:45.914955Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 06:15:45.915191 waagent[1916]: 2025-07-07T06:15:45.915170Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 7 06:15:45.915373 waagent[1916]: 2025-07-07T06:15:45.915344Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 7 06:15:45.915581 waagent[1916]: 2025-07-07T06:15:45.915552Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 06:15:45.915847 waagent[1916]: 2025-07-07T06:15:45.915799Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 7 06:15:45.915847 waagent[1916]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 7 06:15:45.915847 waagent[1916]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jul 7 06:15:45.915847 waagent[1916]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 7 06:15:45.915847 waagent[1916]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 7 06:15:45.915847 waagent[1916]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 06:15:45.915847 waagent[1916]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 06:15:45.916009 waagent[1916]: 2025-07-07T06:15:45.915888Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 06:15:45.916043 waagent[1916]: 2025-07-07T06:15:45.916021Z INFO EnvHandler ExtHandler Configure routes Jul 7 06:15:45.916324 waagent[1916]: 2025-07-07T06:15:45.916292Z INFO SendTelemetryHandler ExtHandler Successfully started the 
SendTelemetryHandler thread Jul 7 06:15:45.916411 waagent[1916]: 2025-07-07T06:15:45.916390Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 7 06:15:45.916656 waagent[1916]: 2025-07-07T06:15:45.916631Z INFO EnvHandler ExtHandler Gateway:None Jul 7 06:15:45.916996 waagent[1916]: 2025-07-07T06:15:45.916963Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 7 06:15:45.917141 waagent[1916]: 2025-07-07T06:15:45.917112Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 7 06:15:45.917256 waagent[1916]: 2025-07-07T06:15:45.917234Z INFO EnvHandler ExtHandler Routes:None Jul 7 06:15:45.917900 waagent[1916]: 2025-07-07T06:15:45.917874Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 7 06:15:45.958288 waagent[1916]: 2025-07-07T06:15:45.958248Z INFO ExtHandler ExtHandler Jul 7 06:15:45.958358 waagent[1916]: 2025-07-07T06:15:45.958328Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: bb3039eb-fa5a-4c36-a8d7-83b47f62cd7b correlation b2dd7689-f8ca-4cc0-9812-547f87b31b94 created: 2025-07-07T06:14:37.110245Z] Jul 7 06:15:45.958829 waagent[1916]: 2025-07-07T06:15:45.958751Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jul 7 06:15:45.959378 waagent[1916]: 2025-07-07T06:15:45.959326Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Jul 7 06:15:45.967702 waagent[1916]: 2025-07-07T06:15:45.967659Z INFO MonitorHandler ExtHandler Network interfaces:
Jul 7 06:15:45.967702 waagent[1916]: Executing ['ip', '-a', '-o', 'link']:
Jul 7 06:15:45.967702 waagent[1916]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul 7 06:15:45.967702 waagent[1916]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:10:15:57 brd ff:ff:ff:ff:ff:ff\ alias Network Device
Jul 7 06:15:45.967702 waagent[1916]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:10:15:57 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Jul 7 06:15:45.967702 waagent[1916]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul 7 06:15:45.967702 waagent[1916]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jul 7 06:15:45.967702 waagent[1916]: 2: eth0 inet 10.200.4.8/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Jul 7 06:15:45.967702 waagent[1916]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul 7 06:15:45.967702 waagent[1916]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jul 7 06:15:45.967702 waagent[1916]: 2: eth0 inet6 fe80::6245:bdff:fe10:1557/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 7 06:15:45.967702 waagent[1916]: 3: enP30832s1 inet6 fe80::6245:bdff:fe10:1557/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 7 06:15:45.990770 waagent[1916]: 2025-07-07T06:15:45.990726Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jul 7 06:15:45.990770 waagent[1916]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 7 06:15:45.990770 waagent[1916]: pkts bytes target prot opt in out source destination
Jul 7 06:15:45.990770 waagent[1916]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 7 06:15:45.990770 waagent[1916]: pkts bytes target prot opt in out source destination
Jul 7 06:15:45.990770 waagent[1916]: Chain OUTPUT (policy ACCEPT 5 packets, 2836 bytes)
Jul 7 06:15:45.990770 waagent[1916]: pkts bytes target prot opt in out source destination
Jul 7 06:15:45.990770 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 7 06:15:45.990770 waagent[1916]: 3 164 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 7 06:15:45.990770 waagent[1916]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 7 06:15:45.993529 waagent[1916]: 2025-07-07T06:15:45.993485Z INFO EnvHandler ExtHandler Current Firewall rules:
Jul 7 06:15:45.993529 waagent[1916]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 7 06:15:45.993529 waagent[1916]: pkts bytes target prot opt in out source destination
Jul 7 06:15:45.993529 waagent[1916]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 7 06:15:45.993529 waagent[1916]: pkts bytes target prot opt in out source destination
Jul 7 06:15:45.993529 waagent[1916]: Chain OUTPUT (policy ACCEPT 5 packets, 2836 bytes)
Jul 7 06:15:45.993529 waagent[1916]: pkts bytes target prot opt in out source destination
Jul 7 06:15:45.993529 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 7 06:15:45.993529 waagent[1916]: 7 512 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 7 06:15:45.993529 waagent[1916]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 7 06:15:45.997119 waagent[1916]: 2025-07-07T06:15:45.997079Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Jul 7 06:15:45.997119 waagent[1916]: Try `iptables -h' or 'iptables --help' for more information.)
Jul 7 06:15:45.997439 waagent[1916]: 2025-07-07T06:15:45.997418Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: EE9AC3E4-BF4D-4346-B2B8-6BE751139F3E;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jul 7 06:15:53.697315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:15:53.699637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:15:54.291059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:15:54.297126 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:15:54.335187 kubelet[2067]: E0707 06:15:54.335133 2067 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:15:54.338288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:15:54.338418 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:15:54.338778 systemd[1]: kubelet.service: Consumed 148ms CPU time, 108M memory peak.
Jul 7 06:15:54.545389 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 7 06:15:54.546479 systemd[1]: Started sshd@0-10.200.4.8:22-10.200.16.10:47136.service - OpenSSH per-connection server daemon (10.200.16.10:47136).
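Between the "Created" and "Current" firewall snapshots above, the counters on the root-owned ACCEPT rule toward the Azure wireserver (168.63.129.16) advance from 3 packets / 164 bytes to 7 packets / 512 bytes. A minimal Python sketch of that delta arithmetic; the rule strings are excerpted from the log, and the `counters` helper name is ours, not part of waagent:

```python
import re

# Two OUTPUT-chain rule lines copied from the waagent log snapshots above;
# in `iptables -nxv` output the two leading numbers are the packet and
# byte counters for the rule.
snapshot_created = "3 164 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0"
snapshot_current = "7 512 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0"

def counters(rule_line):
    """Extract (packets, bytes) from an `iptables -nxv` style rule line."""
    pkts, nbytes = re.match(r"\s*(\d+)\s+(\d+)\s", rule_line).groups()
    return int(pkts), int(nbytes)

p0, b0 = counters(snapshot_created)
p1, b1 = counters(snapshot_current)
# Traffic matched between the two snapshots: 4 packets, 348 bytes.
print(p1 - p0, b1 - b0)
```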
Jul 7 06:15:55.216681 sshd[2075]: Accepted publickey for core from 10.200.16.10 port 47136 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:15:55.218098 sshd-session[2075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:55.222867 systemd-logind[1706]: New session 3 of user core.
Jul 7 06:15:55.229932 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 7 06:15:55.745403 systemd[1]: Started sshd@1-10.200.4.8:22-10.200.16.10:47148.service - OpenSSH per-connection server daemon (10.200.16.10:47148).
Jul 7 06:15:56.340627 sshd[2080]: Accepted publickey for core from 10.200.16.10 port 47148 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:15:56.342217 sshd-session[2080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:56.347465 systemd-logind[1706]: New session 4 of user core.
Jul 7 06:15:56.352984 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 7 06:15:56.779148 sshd[2082]: Connection closed by 10.200.16.10 port 47148
Jul 7 06:15:56.780174 sshd-session[2080]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:56.784307 systemd[1]: sshd@1-10.200.4.8:22-10.200.16.10:47148.service: Deactivated successfully.
Jul 7 06:15:56.786154 systemd[1]: session-4.scope: Deactivated successfully.
Jul 7 06:15:56.786896 systemd-logind[1706]: Session 4 logged out. Waiting for processes to exit.
Jul 7 06:15:56.788323 systemd-logind[1706]: Removed session 4.
Jul 7 06:15:56.884964 systemd[1]: Started sshd@2-10.200.4.8:22-10.200.16.10:47164.service - OpenSSH per-connection server daemon (10.200.16.10:47164).
Jul 7 06:15:57.473542 sshd[2088]: Accepted publickey for core from 10.200.16.10 port 47164 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:15:57.475091 sshd-session[2088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:57.480254 systemd-logind[1706]: New session 5 of user core.
Jul 7 06:15:57.486004 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 06:15:57.893776 sshd[2090]: Connection closed by 10.200.16.10 port 47164
Jul 7 06:15:57.894415 sshd-session[2088]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:57.898288 systemd[1]: sshd@2-10.200.4.8:22-10.200.16.10:47164.service: Deactivated successfully.
Jul 7 06:15:57.900039 systemd[1]: session-5.scope: Deactivated successfully.
Jul 7 06:15:57.900748 systemd-logind[1706]: Session 5 logged out. Waiting for processes to exit.
Jul 7 06:15:57.902108 systemd-logind[1706]: Removed session 5.
Jul 7 06:15:58.002231 systemd[1]: Started sshd@3-10.200.4.8:22-10.200.16.10:47174.service - OpenSSH per-connection server daemon (10.200.16.10:47174).
Jul 7 06:15:58.600448 sshd[2096]: Accepted publickey for core from 10.200.16.10 port 47174 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:15:58.602035 sshd-session[2096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:58.607227 systemd-logind[1706]: New session 6 of user core.
Jul 7 06:15:58.614957 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 06:15:59.025341 sshd[2098]: Connection closed by 10.200.16.10 port 47174
Jul 7 06:15:59.026156 sshd-session[2096]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:59.029899 systemd[1]: sshd@3-10.200.4.8:22-10.200.16.10:47174.service: Deactivated successfully.
Jul 7 06:15:59.031450 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 06:15:59.032107 systemd-logind[1706]: Session 6 logged out. Waiting for processes to exit.
Jul 7 06:15:59.033346 systemd-logind[1706]: Removed session 6.
Jul 7 06:15:59.137952 systemd[1]: Started sshd@4-10.200.4.8:22-10.200.16.10:47178.service - OpenSSH per-connection server daemon (10.200.16.10:47178).
Jul 7 06:15:59.730536 sshd[2104]: Accepted publickey for core from 10.200.16.10 port 47178 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:15:59.731905 sshd-session[2104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:59.736565 systemd-logind[1706]: New session 7 of user core.
Jul 7 06:15:59.746965 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 06:16:00.162028 sudo[2107]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 06:16:00.162275 sudo[2107]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:16:00.190522 sudo[2107]: pam_unix(sudo:session): session closed for user root
Jul 7 06:16:00.283669 sshd[2106]: Connection closed by 10.200.16.10 port 47178
Jul 7 06:16:00.284481 sshd-session[2104]: pam_unix(sshd:session): session closed for user core
Jul 7 06:16:00.288743 systemd[1]: sshd@4-10.200.4.8:22-10.200.16.10:47178.service: Deactivated successfully.
Jul 7 06:16:00.290532 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 06:16:00.291321 systemd-logind[1706]: Session 7 logged out. Waiting for processes to exit.
Jul 7 06:16:00.292659 systemd-logind[1706]: Removed session 7.
Jul 7 06:16:00.393415 systemd[1]: Started sshd@5-10.200.4.8:22-10.200.16.10:48624.service - OpenSSH per-connection server daemon (10.200.16.10:48624).
Jul 7 06:16:00.982636 sshd[2113]: Accepted publickey for core from 10.200.16.10 port 48624 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:16:00.984115 sshd-session[2113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:16:00.988588 systemd-logind[1706]: New session 8 of user core.
Jul 7 06:16:00.996969 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 06:16:01.307474 sudo[2117]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 06:16:01.307697 sudo[2117]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:16:01.315386 sudo[2117]: pam_unix(sudo:session): session closed for user root
Jul 7 06:16:01.319264 sudo[2116]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 7 06:16:01.319473 sudo[2116]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:16:01.327512 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:16:01.359293 augenrules[2139]: No rules
Jul 7 06:16:01.360461 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:16:01.360681 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:16:01.361556 sudo[2116]: pam_unix(sudo:session): session closed for user root
Jul 7 06:16:01.468580 sshd[2115]: Connection closed by 10.200.16.10 port 48624
Jul 7 06:16:01.469120 sshd-session[2113]: pam_unix(sshd:session): session closed for user core
Jul 7 06:16:01.472208 systemd[1]: sshd@5-10.200.4.8:22-10.200.16.10:48624.service: Deactivated successfully.
Jul 7 06:16:01.473709 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 06:16:01.474946 systemd-logind[1706]: Session 8 logged out. Waiting for processes to exit.
Jul 7 06:16:01.476056 systemd-logind[1706]: Removed session 8.
Jul 7 06:16:01.573124 systemd[1]: Started sshd@6-10.200.4.8:22-10.200.16.10:48632.service - OpenSSH per-connection server daemon (10.200.16.10:48632).
Jul 7 06:16:02.167149 sshd[2148]: Accepted publickey for core from 10.200.16.10 port 48632 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:16:02.168606 sshd-session[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:16:02.173734 systemd-logind[1706]: New session 9 of user core.
Jul 7 06:16:02.181965 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 06:16:02.494327 sudo[2151]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 06:16:02.494588 sudo[2151]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:16:03.555563 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 06:16:03.569115 (dockerd)[2169]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 06:16:04.145130 dockerd[2169]: time="2025-07-07T06:16:04.145067944Z" level=info msg="Starting up"
Jul 7 06:16:04.145855 dockerd[2169]: time="2025-07-07T06:16:04.145831005Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 7 06:16:04.332282 dockerd[2169]: time="2025-07-07T06:16:04.332234945Z" level=info msg="Loading containers: start."
Jul 7 06:16:04.368829 kernel: Initializing XFRM netlink socket
Jul 7 06:16:04.582509 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 7 06:16:04.585843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:16:04.611581 systemd-networkd[1361]: docker0: Link UP
Jul 7 06:16:04.656828 dockerd[2169]: time="2025-07-07T06:16:04.655066612Z" level=info msg="Loading containers: done."
Jul 7 06:16:04.806757 dockerd[2169]: time="2025-07-07T06:16:04.806702368Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 06:16:04.806925 dockerd[2169]: time="2025-07-07T06:16:04.806827029Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 7 06:16:04.806955 dockerd[2169]: time="2025-07-07T06:16:04.806936959Z" level=info msg="Initializing buildkit"
Jul 7 06:16:04.994100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:16:05.000019 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:16:05.036768 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:16:05.110861 kubelet[2343]: E0707 06:16:05.035144 2343 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:16:05.036894 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:16:05.037177 systemd[1]: kubelet.service: Consumed 146ms CPU time, 110.4M memory peak.
Jul 7 06:16:05.121880 chronyd[1728]: Selected source PHC0
Jul 7 06:16:05.150096 dockerd[2169]: time="2025-07-07T06:16:05.150060223Z" level=info msg="Completed buildkit initialization"
Jul 7 06:16:05.157640 dockerd[2169]: time="2025-07-07T06:16:05.157612007Z" level=info msg="Daemon has completed initialization"
Jul 7 06:16:05.157974 dockerd[2169]: time="2025-07-07T06:16:05.157716396Z" level=info msg="API listen on /run/docker.sock"
Jul 7 06:16:05.157783 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 06:16:06.202508 containerd[1734]: time="2025-07-07T06:16:06.202456786Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 7 06:16:07.099884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2559766903.mount: Deactivated successfully.
Jul 7 06:16:08.307994 containerd[1734]: time="2025-07-07T06:16:08.307933411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:08.310757 containerd[1734]: time="2025-07-07T06:16:08.310717259Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079107"
Jul 7 06:16:08.319730 containerd[1734]: time="2025-07-07T06:16:08.319684342Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:08.328321 containerd[1734]: time="2025-07-07T06:16:08.328277979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:08.329231 containerd[1734]: time="2025-07-07T06:16:08.328977473Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.126473735s"
Jul 7 06:16:08.329231 containerd[1734]: time="2025-07-07T06:16:08.329022097Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 7 06:16:08.329906 containerd[1734]: time="2025-07-07T06:16:08.329872335Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 7 06:16:09.808350 containerd[1734]: time="2025-07-07T06:16:09.808286594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:09.811799 containerd[1734]: time="2025-07-07T06:16:09.811755349Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018954"
Jul 7 06:16:09.815086 containerd[1734]: time="2025-07-07T06:16:09.815059852Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:09.820365 containerd[1734]: time="2025-07-07T06:16:09.820305864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:09.821442 containerd[1734]: time="2025-07-07T06:16:09.821017517Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.491109664s"
Jul 7 06:16:09.821442 containerd[1734]: time="2025-07-07T06:16:09.821057646Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 7 06:16:09.821915 containerd[1734]: time="2025-07-07T06:16:09.821894216Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 7 06:16:11.028583 containerd[1734]: time="2025-07-07T06:16:11.028521221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:11.033010 containerd[1734]: time="2025-07-07T06:16:11.032975979Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155063"
Jul 7 06:16:11.036090 containerd[1734]: time="2025-07-07T06:16:11.036048630Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:11.040447 containerd[1734]: time="2025-07-07T06:16:11.040399515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:11.041186 containerd[1734]: time="2025-07-07T06:16:11.040973880Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.219049703s"
Jul 7 06:16:11.041186 containerd[1734]: time="2025-07-07T06:16:11.041010754Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 7 06:16:11.041720 containerd[1734]: time="2025-07-07T06:16:11.041685074Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 7 06:16:12.104800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1419142665.mount: Deactivated successfully.
Jul 7 06:16:12.477603 containerd[1734]: time="2025-07-07T06:16:12.477483645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:12.480923 containerd[1734]: time="2025-07-07T06:16:12.480881307Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892754"
Jul 7 06:16:12.484706 containerd[1734]: time="2025-07-07T06:16:12.484661176Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:12.492274 containerd[1734]: time="2025-07-07T06:16:12.492232051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:12.492727 containerd[1734]: time="2025-07-07T06:16:12.492533036Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.450810207s"
Jul 7 06:16:12.492727 containerd[1734]: time="2025-07-07T06:16:12.492567803Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jul 7 06:16:12.493220 containerd[1734]: time="2025-07-07T06:16:12.493200491Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 7 06:16:13.273602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1572886316.mount: Deactivated successfully.
Jul 7 06:16:14.273131 containerd[1734]: time="2025-07-07T06:16:14.273077778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:14.275857 containerd[1734]: time="2025-07-07T06:16:14.275816130Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Jul 7 06:16:14.279854 containerd[1734]: time="2025-07-07T06:16:14.279799779Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:14.284463 containerd[1734]: time="2025-07-07T06:16:14.284403036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:14.285438 containerd[1734]: time="2025-07-07T06:16:14.285275273Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.792045587s"
Jul 7 06:16:14.285438 containerd[1734]: time="2025-07-07T06:16:14.285311502Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jul 7 06:16:14.286038 containerd[1734]: time="2025-07-07T06:16:14.285992652Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 7 06:16:14.939536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2530993997.mount: Deactivated successfully.
Jul 7 06:16:14.970935 containerd[1734]: time="2025-07-07T06:16:14.970894687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:16:14.973276 containerd[1734]: time="2025-07-07T06:16:14.973243045Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jul 7 06:16:14.984101 containerd[1734]: time="2025-07-07T06:16:14.984062138Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:16:15.000311 containerd[1734]: time="2025-07-07T06:16:15.000254550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:16:15.000814 containerd[1734]: time="2025-07-07T06:16:15.000776460Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 714.751394ms"
Jul 7 06:16:15.000853 containerd[1734]: time="2025-07-07T06:16:15.000818530Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 7 06:16:15.001485 containerd[1734]: time="2025-07-07T06:16:15.001465919Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 7 06:16:15.081897 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 7 06:16:15.083378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:16:15.616860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:16:15.624022 (kubelet)[2516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:16:15.660650 kubelet[2516]: E0707 06:16:15.660601 2516 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:16:15.662084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:16:15.662233 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:16:15.662562 systemd[1]: kubelet.service: Consumed 141ms CPU time, 110.4M memory peak.
Jul 7 06:16:15.977135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1718997046.mount: Deactivated successfully.
Jul 7 06:16:17.698569 containerd[1734]: time="2025-07-07T06:16:17.698501114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:17.700775 containerd[1734]: time="2025-07-07T06:16:17.700743118Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247183"
Jul 7 06:16:17.704050 containerd[1734]: time="2025-07-07T06:16:17.704001984Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:17.708687 containerd[1734]: time="2025-07-07T06:16:17.708638714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:17.709478 containerd[1734]: time="2025-07-07T06:16:17.709323385Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.707831774s"
Jul 7 06:16:17.709478 containerd[1734]: time="2025-07-07T06:16:17.709355633Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jul 7 06:16:19.781074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:16:19.781311 systemd[1]: kubelet.service: Consumed 141ms CPU time, 110.4M memory peak.
Jul 7 06:16:19.783965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:16:19.805769 systemd[1]: Reload requested from client PID 2606 ('systemctl') (unit session-9.scope)...
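The containerd pull records above carry enough data to work out effective pull throughput; for example, the etcd image reports size "58938593" (bytes) pulled in 2.707831774 s. A small worked computation, using only those two figures from the log:

```python
# Figures taken from the "Pulled image registry.k8s.io/etcd:3.5.21-0" line
# above: the reported image size in bytes and the pull duration in seconds.
size_bytes = 58_938_593
duration_s = 2.707831774

# Effective throughput for this pull.
mib_per_s = size_bytes / duration_s / (1024 * 1024)
print(f"{mib_per_s:.1f} MiB/s")  # roughly 20.8 MiB/s
```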
Jul 7 06:16:19.805788 systemd[1]: Reloading...
Jul 7 06:16:19.901835 zram_generator::config[2661]: No configuration found.
Jul 7 06:16:19.979487 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:16:20.076763 systemd[1]: Reloading finished in 270 ms.
Jul 7 06:16:20.112754 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 7 06:16:20.112846 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 7 06:16:20.113094 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:16:20.113136 systemd[1]: kubelet.service: Consumed 86ms CPU time, 78.6M memory peak.
Jul 7 06:16:20.114993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:16:20.782200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:16:20.788089 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 06:16:20.826029 kubelet[2719]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:16:20.826029 kubelet[2719]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 7 06:16:20.826029 kubelet[2719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
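The deprecation warnings above all point at the kubelet's `--config` file, the same `/var/lib/kubelet/config.yaml` this node's earlier kubelet attempts failed to find. As a hedged illustration only, a minimal config-file fragment expressing the `--container-runtime-endpoint` setting in `KubeletConfiguration` form might look like the following; the socket path is an assumed containerd default, not a value read from this host:

```yaml
# Hypothetical /var/lib/kubelet/config.yaml fragment (not from this log).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Config-file equivalent of the deprecated --container-runtime-endpoint flag;
# the endpoint shown is containerd's conventional socket, assumed here.
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```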
Jul 7 06:16:20.826288 kubelet[2719]: I0707 06:16:20.826079 2719 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 06:16:21.078707 kubelet[2719]: I0707 06:16:21.078634 2719 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 7 06:16:21.078707 kubelet[2719]: I0707 06:16:21.078652 2719 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 06:16:21.079099 kubelet[2719]: I0707 06:16:21.079080 2719 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 7 06:16:21.104562 kubelet[2719]: E0707 06:16:21.104380 2719 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.8:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 7 06:16:21.104562 kubelet[2719]: I0707 06:16:21.104425 2719 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 06:16:21.111100 kubelet[2719]: I0707 06:16:21.111084 2719 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 7 06:16:21.116158 kubelet[2719]: I0707 06:16:21.116140 2719 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 06:16:21.116443 kubelet[2719]: I0707 06:16:21.116421 2719 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 06:16:21.116612 kubelet[2719]: I0707 06:16:21.116442 2719 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-a-ca7a3a169f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 06:16:21.116747 kubelet[2719]: I0707 06:16:21.116618 2719 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 06:16:21.116747 kubelet[2719]: I0707 06:16:21.116629 2719 container_manager_linux.go:303] "Creating device plugin manager"
Jul 7 06:16:21.118243 kubelet[2719]: I0707 06:16:21.118232 2719 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:16:21.120161 kubelet[2719]: I0707 06:16:21.120132 2719 kubelet.go:480] "Attempting to sync node with API server"
Jul 7 06:16:21.120222 kubelet[2719]: I0707 06:16:21.120170 2719 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 06:16:21.120222 kubelet[2719]: I0707 06:16:21.120195 2719 kubelet.go:386] "Adding apiserver pod source"
Jul 7 06:16:21.120222 kubelet[2719]: I0707 06:16:21.120205 2719 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 06:16:21.126847 kubelet[2719]: E0707 06:16:21.126499 2719 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-ca7a3a169f&limit=500&resourceVersion=0\": dial tcp 10.200.4.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 7 06:16:21.126847 kubelet[2719]: E0707 06:16:21.126571 2719 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 7 06:16:21.126951 kubelet[2719]: I0707 06:16:21.126878 2719 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 7 06:16:21.127343 kubelet[2719]: I0707 06:16:21.127330 2719 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 7 06:16:21.127978
kubelet[2719]: W0707 06:16:21.127965 2719 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:16:21.130906 kubelet[2719]: I0707 06:16:21.130300 2719 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:16:21.130906 kubelet[2719]: I0707 06:16:21.130352 2719 server.go:1289] "Started kubelet" Jul 7 06:16:21.134212 kubelet[2719]: I0707 06:16:21.134197 2719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:16:21.140092 kubelet[2719]: I0707 06:16:21.140061 2719 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:16:21.141248 kubelet[2719]: I0707 06:16:21.141197 2719 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:16:21.141634 kubelet[2719]: I0707 06:16:21.141615 2719 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:16:21.145498 kubelet[2719]: I0707 06:16:21.145481 2719 server.go:317] "Adding debug handlers to kubelet server" Jul 7 06:16:21.148085 kubelet[2719]: I0707 06:16:21.148060 2719 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:16:21.150001 kubelet[2719]: I0707 06:16:21.149982 2719 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:16:21.150212 kubelet[2719]: E0707 06:16:21.150194 2719 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-ca7a3a169f\" not found" Jul 7 06:16:21.150706 kubelet[2719]: I0707 06:16:21.150686 2719 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:16:21.150757 kubelet[2719]: I0707 06:16:21.150742 2719 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:16:21.151458 kubelet[2719]: E0707 06:16:21.151433 2719 reflector.go:200] "Failed to watch" 
err="failed to list *v1.CSIDriver: Get \"https://10.200.4.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 06:16:21.151531 kubelet[2719]: E0707 06:16:21.151503 2719 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-ca7a3a169f?timeout=10s\": dial tcp 10.200.4.8:6443: connect: connection refused" interval="200ms" Jul 7 06:16:21.153246 kubelet[2719]: E0707 06:16:21.151565 2719 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.8:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.8:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.1-a-ca7a3a169f.184fe38e794c6f74 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-a-ca7a3a169f,UID:ci-4372.0.1-a-ca7a3a169f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-a-ca7a3a169f,},FirstTimestamp:2025-07-07 06:16:21.130325876 +0000 UTC m=+0.338065543,LastTimestamp:2025-07-07 06:16:21.130325876 +0000 UTC m=+0.338065543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-a-ca7a3a169f,}" Jul 7 06:16:21.154950 kubelet[2719]: E0707 06:16:21.154522 2719 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:16:21.155177 kubelet[2719]: I0707 06:16:21.155165 2719 factory.go:223] Registration of the containerd container factory successfully Jul 7 06:16:21.155234 kubelet[2719]: I0707 06:16:21.155229 2719 factory.go:223] Registration of the systemd container factory successfully Jul 7 06:16:21.155324 kubelet[2719]: I0707 06:16:21.155313 2719 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:16:21.159359 kubelet[2719]: I0707 06:16:21.159329 2719 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 06:16:21.160372 kubelet[2719]: I0707 06:16:21.160356 2719 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 06:16:21.160453 kubelet[2719]: I0707 06:16:21.160446 2719 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 06:16:21.160521 kubelet[2719]: I0707 06:16:21.160513 2719 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 06:16:21.160560 kubelet[2719]: I0707 06:16:21.160555 2719 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 06:16:21.160643 kubelet[2719]: E0707 06:16:21.160624 2719 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:16:21.166117 kubelet[2719]: E0707 06:16:21.166098 2719 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 06:16:21.183978 kubelet[2719]: I0707 06:16:21.183961 2719 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:16:21.183978 kubelet[2719]: I0707 06:16:21.183972 2719 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:16:21.184080 kubelet[2719]: I0707 06:16:21.184006 2719 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:16:21.188706 kubelet[2719]: I0707 06:16:21.188692 2719 policy_none.go:49] "None policy: Start" Jul 7 06:16:21.188756 kubelet[2719]: I0707 06:16:21.188709 2719 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:16:21.188756 kubelet[2719]: I0707 06:16:21.188720 2719 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:16:21.200500 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:16:21.212324 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:16:21.215199 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 7 06:16:21.220470 kubelet[2719]: E0707 06:16:21.220397 2719 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 06:16:21.220586 kubelet[2719]: I0707 06:16:21.220566 2719 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:16:21.220626 kubelet[2719]: I0707 06:16:21.220582 2719 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:16:21.220949 kubelet[2719]: I0707 06:16:21.220938 2719 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:16:21.221950 kubelet[2719]: E0707 06:16:21.221933 2719 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 06:16:21.222078 kubelet[2719]: E0707 06:16:21.221988 2719 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.1-a-ca7a3a169f\" not found" Jul 7 06:16:21.274078 systemd[1]: Created slice kubepods-burstable-pod0c7383b6776b16e9f3e1defe6f85824c.slice - libcontainer container kubepods-burstable-pod0c7383b6776b16e9f3e1defe6f85824c.slice. Jul 7 06:16:21.281303 kubelet[2719]: E0707 06:16:21.281284 2719 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-ca7a3a169f\" not found" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.286726 systemd[1]: Created slice kubepods-burstable-pod3aecc88260d744202c8c6bfbe88af62f.slice - libcontainer container kubepods-burstable-pod3aecc88260d744202c8c6bfbe88af62f.slice. 
Jul 7 06:16:21.297610 kubelet[2719]: E0707 06:16:21.297542 2719 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-ca7a3a169f\" not found" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.299775 systemd[1]: Created slice kubepods-burstable-poda804543d44cdc77fc5870c8104c10471.slice - libcontainer container kubepods-burstable-poda804543d44cdc77fc5870c8104c10471.slice. Jul 7 06:16:21.301788 kubelet[2719]: E0707 06:16:21.301765 2719 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-ca7a3a169f\" not found" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.322026 kubelet[2719]: I0707 06:16:21.321995 2719 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.322320 kubelet[2719]: E0707 06:16:21.322289 2719 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.8:6443/api/v1/nodes\": dial tcp 10.200.4.8:6443: connect: connection refused" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.351800 kubelet[2719]: I0707 06:16:21.351669 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3aecc88260d744202c8c6bfbe88af62f-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-ca7a3a169f\" (UID: \"3aecc88260d744202c8c6bfbe88af62f\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.351800 kubelet[2719]: I0707 06:16:21.351711 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3aecc88260d744202c8c6bfbe88af62f-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-a-ca7a3a169f\" (UID: \"3aecc88260d744202c8c6bfbe88af62f\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.351800 
kubelet[2719]: I0707 06:16:21.351744 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c7383b6776b16e9f3e1defe6f85824c-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-a-ca7a3a169f\" (UID: \"0c7383b6776b16e9f3e1defe6f85824c\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.351800 kubelet[2719]: I0707 06:16:21.351766 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c7383b6776b16e9f3e1defe6f85824c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-a-ca7a3a169f\" (UID: \"0c7383b6776b16e9f3e1defe6f85824c\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.351800 kubelet[2719]: E0707 06:16:21.351756 2719 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-ca7a3a169f?timeout=10s\": dial tcp 10.200.4.8:6443: connect: connection refused" interval="400ms" Jul 7 06:16:21.352015 kubelet[2719]: I0707 06:16:21.351789 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3aecc88260d744202c8c6bfbe88af62f-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-ca7a3a169f\" (UID: \"3aecc88260d744202c8c6bfbe88af62f\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.352015 kubelet[2719]: I0707 06:16:21.351820 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3aecc88260d744202c8c6bfbe88af62f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-a-ca7a3a169f\" (UID: \"3aecc88260d744202c8c6bfbe88af62f\") " 
pod="kube-system/kube-controller-manager-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.352015 kubelet[2719]: I0707 06:16:21.351837 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a804543d44cdc77fc5870c8104c10471-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-a-ca7a3a169f\" (UID: \"a804543d44cdc77fc5870c8104c10471\") " pod="kube-system/kube-scheduler-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.352015 kubelet[2719]: I0707 06:16:21.351852 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c7383b6776b16e9f3e1defe6f85824c-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-a-ca7a3a169f\" (UID: \"0c7383b6776b16e9f3e1defe6f85824c\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.352015 kubelet[2719]: I0707 06:16:21.351869 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3aecc88260d744202c8c6bfbe88af62f-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-a-ca7a3a169f\" (UID: \"3aecc88260d744202c8c6bfbe88af62f\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.524698 kubelet[2719]: I0707 06:16:21.524659 2719 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.525001 kubelet[2719]: E0707 06:16:21.524973 2719 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.8:6443/api/v1/nodes\": dial tcp 10.200.4.8:6443: connect: connection refused" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.584338 containerd[1734]: time="2025-07-07T06:16:21.584268482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-a-ca7a3a169f,Uid:0c7383b6776b16e9f3e1defe6f85824c,Namespace:kube-system,Attempt:0,}" 
Jul 7 06:16:21.599827 containerd[1734]: time="2025-07-07T06:16:21.599045317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-a-ca7a3a169f,Uid:3aecc88260d744202c8c6bfbe88af62f,Namespace:kube-system,Attempt:0,}" Jul 7 06:16:21.602632 containerd[1734]: time="2025-07-07T06:16:21.602570270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-a-ca7a3a169f,Uid:a804543d44cdc77fc5870c8104c10471,Namespace:kube-system,Attempt:0,}" Jul 7 06:16:21.684530 containerd[1734]: time="2025-07-07T06:16:21.684488434Z" level=info msg="connecting to shim 6522d1be793b5dd63fffe3e5d9eb3b7ddb4e43bd42a7a2c4a439ec4237ab0f78" address="unix:///run/containerd/s/217c2ad09dc50fafd7a5ed0db57ba1be51f575d70c1c2a2dfb02509220ba67db" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:21.708959 systemd[1]: Started cri-containerd-6522d1be793b5dd63fffe3e5d9eb3b7ddb4e43bd42a7a2c4a439ec4237ab0f78.scope - libcontainer container 6522d1be793b5dd63fffe3e5d9eb3b7ddb4e43bd42a7a2c4a439ec4237ab0f78. 
Jul 7 06:16:21.717582 containerd[1734]: time="2025-07-07T06:16:21.717448950Z" level=info msg="connecting to shim 805231e79c5d690567baa49a029e2cbcb7a417b2d2c831d72f4cf2d0ee592ab6" address="unix:///run/containerd/s/a471746578a3b8b11974eae64738825036ec6fb7af998819fe23f7e27918242d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:21.738555 containerd[1734]: time="2025-07-07T06:16:21.738527839Z" level=info msg="connecting to shim 52dfb270407abc04e6e1a3d536f3445e9de5afb913f02ecaabcd233b910b7fed" address="unix:///run/containerd/s/ee8576c4cc8ad6594c21724d1fd6760bca84bc558a2353ab05572b061b408fe6" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:21.752291 kubelet[2719]: E0707 06:16:21.752261 2719 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-ca7a3a169f?timeout=10s\": dial tcp 10.200.4.8:6443: connect: connection refused" interval="800ms" Jul 7 06:16:21.755965 systemd[1]: Started cri-containerd-805231e79c5d690567baa49a029e2cbcb7a417b2d2c831d72f4cf2d0ee592ab6.scope - libcontainer container 805231e79c5d690567baa49a029e2cbcb7a417b2d2c831d72f4cf2d0ee592ab6. Jul 7 06:16:21.764087 systemd[1]: Started cri-containerd-52dfb270407abc04e6e1a3d536f3445e9de5afb913f02ecaabcd233b910b7fed.scope - libcontainer container 52dfb270407abc04e6e1a3d536f3445e9de5afb913f02ecaabcd233b910b7fed. 
Jul 7 06:16:21.792047 containerd[1734]: time="2025-07-07T06:16:21.792017724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-a-ca7a3a169f,Uid:0c7383b6776b16e9f3e1defe6f85824c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6522d1be793b5dd63fffe3e5d9eb3b7ddb4e43bd42a7a2c4a439ec4237ab0f78\"" Jul 7 06:16:21.802348 containerd[1734]: time="2025-07-07T06:16:21.802302543Z" level=info msg="CreateContainer within sandbox \"6522d1be793b5dd63fffe3e5d9eb3b7ddb4e43bd42a7a2c4a439ec4237ab0f78\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:16:21.827760 containerd[1734]: time="2025-07-07T06:16:21.827735697Z" level=info msg="Container b3806b115bcd4913cc6efc8189e674fc5d1f803827904cde3968af4e41969d88: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:21.834224 containerd[1734]: time="2025-07-07T06:16:21.834194414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-a-ca7a3a169f,Uid:3aecc88260d744202c8c6bfbe88af62f,Namespace:kube-system,Attempt:0,} returns sandbox id \"805231e79c5d690567baa49a029e2cbcb7a417b2d2c831d72f4cf2d0ee592ab6\"" Jul 7 06:16:21.837775 containerd[1734]: time="2025-07-07T06:16:21.837746249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-a-ca7a3a169f,Uid:a804543d44cdc77fc5870c8104c10471,Namespace:kube-system,Attempt:0,} returns sandbox id \"52dfb270407abc04e6e1a3d536f3445e9de5afb913f02ecaabcd233b910b7fed\"" Jul 7 06:16:21.847225 containerd[1734]: time="2025-07-07T06:16:21.847207334Z" level=info msg="CreateContainer within sandbox \"805231e79c5d690567baa49a029e2cbcb7a417b2d2c831d72f4cf2d0ee592ab6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:16:21.852147 containerd[1734]: time="2025-07-07T06:16:21.852126414Z" level=info msg="CreateContainer within sandbox \"52dfb270407abc04e6e1a3d536f3445e9de5afb913f02ecaabcd233b910b7fed\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:16:21.880009 containerd[1734]: time="2025-07-07T06:16:21.879715802Z" level=info msg="CreateContainer within sandbox \"6522d1be793b5dd63fffe3e5d9eb3b7ddb4e43bd42a7a2c4a439ec4237ab0f78\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b3806b115bcd4913cc6efc8189e674fc5d1f803827904cde3968af4e41969d88\"" Jul 7 06:16:21.880596 containerd[1734]: time="2025-07-07T06:16:21.880569003Z" level=info msg="StartContainer for \"b3806b115bcd4913cc6efc8189e674fc5d1f803827904cde3968af4e41969d88\"" Jul 7 06:16:21.881525 containerd[1734]: time="2025-07-07T06:16:21.881500887Z" level=info msg="connecting to shim b3806b115bcd4913cc6efc8189e674fc5d1f803827904cde3968af4e41969d88" address="unix:///run/containerd/s/217c2ad09dc50fafd7a5ed0db57ba1be51f575d70c1c2a2dfb02509220ba67db" protocol=ttrpc version=3 Jul 7 06:16:21.890405 containerd[1734]: time="2025-07-07T06:16:21.890375548Z" level=info msg="Container cc4af77c7e136346e6ae80c13a2413648854b3b5a853efc813df6371999a51bb: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:21.897685 containerd[1734]: time="2025-07-07T06:16:21.897660633Z" level=info msg="Container 6d8de0f57ca461de43e8d3d991008be5aa5ab760b63abd02cfcdff808942f5b3: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:21.898957 systemd[1]: Started cri-containerd-b3806b115bcd4913cc6efc8189e674fc5d1f803827904cde3968af4e41969d88.scope - libcontainer container b3806b115bcd4913cc6efc8189e674fc5d1f803827904cde3968af4e41969d88. 
Jul 7 06:16:21.911708 containerd[1734]: time="2025-07-07T06:16:21.911680199Z" level=info msg="CreateContainer within sandbox \"52dfb270407abc04e6e1a3d536f3445e9de5afb913f02ecaabcd233b910b7fed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cc4af77c7e136346e6ae80c13a2413648854b3b5a853efc813df6371999a51bb\"" Jul 7 06:16:21.912091 containerd[1734]: time="2025-07-07T06:16:21.912073802Z" level=info msg="StartContainer for \"cc4af77c7e136346e6ae80c13a2413648854b3b5a853efc813df6371999a51bb\"" Jul 7 06:16:21.913117 containerd[1734]: time="2025-07-07T06:16:21.913095166Z" level=info msg="connecting to shim cc4af77c7e136346e6ae80c13a2413648854b3b5a853efc813df6371999a51bb" address="unix:///run/containerd/s/ee8576c4cc8ad6594c21724d1fd6760bca84bc558a2353ab05572b061b408fe6" protocol=ttrpc version=3 Jul 7 06:16:21.929822 containerd[1734]: time="2025-07-07T06:16:21.929531382Z" level=info msg="CreateContainer within sandbox \"805231e79c5d690567baa49a029e2cbcb7a417b2d2c831d72f4cf2d0ee592ab6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6d8de0f57ca461de43e8d3d991008be5aa5ab760b63abd02cfcdff808942f5b3\"" Jul 7 06:16:21.930022 systemd[1]: Started cri-containerd-cc4af77c7e136346e6ae80c13a2413648854b3b5a853efc813df6371999a51bb.scope - libcontainer container cc4af77c7e136346e6ae80c13a2413648854b3b5a853efc813df6371999a51bb. 
Jul 7 06:16:21.930096 kubelet[2719]: I0707 06:16:21.930018 2719 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.931429 kubelet[2719]: E0707 06:16:21.930323 2719 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.8:6443/api/v1/nodes\": dial tcp 10.200.4.8:6443: connect: connection refused" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:21.931500 containerd[1734]: time="2025-07-07T06:16:21.930860679Z" level=info msg="StartContainer for \"6d8de0f57ca461de43e8d3d991008be5aa5ab760b63abd02cfcdff808942f5b3\"" Jul 7 06:16:21.933410 containerd[1734]: time="2025-07-07T06:16:21.933386614Z" level=info msg="connecting to shim 6d8de0f57ca461de43e8d3d991008be5aa5ab760b63abd02cfcdff808942f5b3" address="unix:///run/containerd/s/a471746578a3b8b11974eae64738825036ec6fb7af998819fe23f7e27918242d" protocol=ttrpc version=3 Jul 7 06:16:21.964935 systemd[1]: Started cri-containerd-6d8de0f57ca461de43e8d3d991008be5aa5ab760b63abd02cfcdff808942f5b3.scope - libcontainer container 6d8de0f57ca461de43e8d3d991008be5aa5ab760b63abd02cfcdff808942f5b3. 
Jul 7 06:16:21.980274 containerd[1734]: time="2025-07-07T06:16:21.980245477Z" level=info msg="StartContainer for \"b3806b115bcd4913cc6efc8189e674fc5d1f803827904cde3968af4e41969d88\" returns successfully" Jul 7 06:16:22.006914 kubelet[2719]: E0707 06:16:22.006876 2719 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 06:16:22.012850 containerd[1734]: time="2025-07-07T06:16:22.012829177Z" level=info msg="StartContainer for \"cc4af77c7e136346e6ae80c13a2413648854b3b5a853efc813df6371999a51bb\" returns successfully" Jul 7 06:16:22.048281 containerd[1734]: time="2025-07-07T06:16:22.048173557Z" level=info msg="StartContainer for \"6d8de0f57ca461de43e8d3d991008be5aa5ab760b63abd02cfcdff808942f5b3\" returns successfully" Jul 7 06:16:22.189865 kubelet[2719]: E0707 06:16:22.189219 2719 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-ca7a3a169f\" not found" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:22.192829 kubelet[2719]: E0707 06:16:22.192668 2719 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-ca7a3a169f\" not found" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:22.194487 kubelet[2719]: E0707 06:16:22.194468 2719 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-ca7a3a169f\" not found" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:22.734199 kubelet[2719]: I0707 06:16:22.734170 2719 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:23.198648 kubelet[2719]: E0707 06:16:23.198607 2719 kubelet.go:3305] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-ca7a3a169f\" not found" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:23.200166 kubelet[2719]: E0707 06:16:23.199079 2719 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-ca7a3a169f\" not found" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:23.443601 kubelet[2719]: E0707 06:16:23.443573 2719 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-ca7a3a169f\" not found" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:23.821204 kubelet[2719]: E0707 06:16:23.821157 2719 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.1-a-ca7a3a169f\" not found" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:23.909171 kubelet[2719]: E0707 06:16:23.909055 2719 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4372.0.1-a-ca7a3a169f.184fe38e794c6f74 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-a-ca7a3a169f,UID:ci-4372.0.1-a-ca7a3a169f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-a-ca7a3a169f,},FirstTimestamp:2025-07-07 06:16:21.130325876 +0000 UTC m=+0.338065543,LastTimestamp:2025-07-07 06:16:21.130325876 +0000 UTC m=+0.338065543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-a-ca7a3a169f,}" Jul 7 06:16:23.966391 kubelet[2719]: I0707 06:16:23.965138 2719 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:23.966391 kubelet[2719]: E0707 06:16:23.965173 2719 kubelet_node_status.go:548] "Error updating node status, will retry" 
err="error getting node \"ci-4372.0.1-a-ca7a3a169f\": node \"ci-4372.0.1-a-ca7a3a169f\" not found" Jul 7 06:16:24.050830 kubelet[2719]: I0707 06:16:24.050635 2719 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:24.067375 kubelet[2719]: E0707 06:16:24.067344 2719 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-a-ca7a3a169f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:24.067597 kubelet[2719]: I0707 06:16:24.067475 2719 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:24.070131 kubelet[2719]: E0707 06:16:24.070111 2719 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-a-ca7a3a169f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:24.070349 kubelet[2719]: I0707 06:16:24.070228 2719 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:24.072246 kubelet[2719]: E0707 06:16:24.071872 2719 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-a-ca7a3a169f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:24.125310 kubelet[2719]: I0707 06:16:24.125288 2719 apiserver.go:52] "Watching apiserver" Jul 7 06:16:24.150951 kubelet[2719]: I0707 06:16:24.150926 2719 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:16:25.738353 systemd[1]: Reload requested from client PID 3000 ('systemctl') (unit session-9.scope)... Jul 7 06:16:25.738367 systemd[1]: Reloading... 
Jul 7 06:16:25.813852 zram_generator::config[3049]: No configuration found. Jul 7 06:16:25.896675 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:16:26.000777 systemd[1]: Reloading finished in 262 ms. Jul 7 06:16:26.029112 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:16:26.054169 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:16:26.054407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:16:26.054451 systemd[1]: kubelet.service: Consumed 662ms CPU time, 129.7M memory peak. Jul 7 06:16:26.056522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:16:26.208034 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jul 7 06:16:26.531874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:16:26.539098 (kubelet)[3113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:16:26.580435 kubelet[3113]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:16:26.580435 kubelet[3113]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:16:26.580435 kubelet[3113]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 06:16:26.580869 kubelet[3113]: I0707 06:16:26.580647 3113 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:16:26.587681 kubelet[3113]: I0707 06:16:26.587653 3113 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 06:16:26.587900 kubelet[3113]: I0707 06:16:26.587746 3113 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:16:26.588009 kubelet[3113]: I0707 06:16:26.587991 3113 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 06:16:26.588958 kubelet[3113]: I0707 06:16:26.588941 3113 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 7 06:16:26.591089 kubelet[3113]: I0707 06:16:26.591047 3113 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:16:26.595492 kubelet[3113]: I0707 06:16:26.595377 3113 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:16:26.600146 kubelet[3113]: I0707 06:16:26.600128 3113 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:16:26.600430 kubelet[3113]: I0707 06:16:26.600409 3113 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:16:26.600707 kubelet[3113]: I0707 06:16:26.600484 3113 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-a-ca7a3a169f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:16:26.600834 kubelet[3113]: I0707 06:16:26.600795 3113 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 
06:16:26.600834 kubelet[3113]: I0707 06:16:26.600819 3113 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 06:16:26.600880 kubelet[3113]: I0707 06:16:26.600871 3113 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:16:26.601034 kubelet[3113]: I0707 06:16:26.601015 3113 kubelet.go:480] "Attempting to sync node with API server" Jul 7 06:16:26.601034 kubelet[3113]: I0707 06:16:26.601031 3113 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:16:26.601083 kubelet[3113]: I0707 06:16:26.601056 3113 kubelet.go:386] "Adding apiserver pod source" Jul 7 06:16:26.601083 kubelet[3113]: I0707 06:16:26.601072 3113 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:16:26.609826 kubelet[3113]: I0707 06:16:26.606931 3113 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:16:26.609826 kubelet[3113]: I0707 06:16:26.607422 3113 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 06:16:26.612272 kubelet[3113]: I0707 06:16:26.612258 3113 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:16:26.612381 kubelet[3113]: I0707 06:16:26.612374 3113 server.go:1289] "Started kubelet" Jul 7 06:16:26.614966 kubelet[3113]: I0707 06:16:26.614943 3113 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:16:26.619544 kubelet[3113]: I0707 06:16:26.619519 3113 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:16:26.620390 kubelet[3113]: I0707 06:16:26.620369 3113 server.go:317] "Adding debug handlers to kubelet server" Jul 7 06:16:26.624606 kubelet[3113]: I0707 06:16:26.624224 3113 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:16:26.624606 kubelet[3113]: I0707 06:16:26.624421 3113 server.go:255] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:16:26.624963 kubelet[3113]: I0707 06:16:26.624618 3113 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:16:26.627367 kubelet[3113]: I0707 06:16:26.627348 3113 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:16:26.629823 kubelet[3113]: I0707 06:16:26.629287 3113 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:16:26.629823 kubelet[3113]: I0707 06:16:26.629401 3113 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:16:26.631289 kubelet[3113]: I0707 06:16:26.631261 3113 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 06:16:26.632612 kubelet[3113]: I0707 06:16:26.632591 3113 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 06:16:26.632752 kubelet[3113]: I0707 06:16:26.632620 3113 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 06:16:26.632752 kubelet[3113]: I0707 06:16:26.632639 3113 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 06:16:26.632752 kubelet[3113]: I0707 06:16:26.632645 3113 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 06:16:26.632752 kubelet[3113]: E0707 06:16:26.632679 3113 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:16:26.634120 kubelet[3113]: E0707 06:16:26.634026 3113 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:16:26.638695 kubelet[3113]: I0707 06:16:26.638676 3113 factory.go:223] Registration of the containerd container factory successfully Jul 7 06:16:26.638778 kubelet[3113]: I0707 06:16:26.638771 3113 factory.go:223] Registration of the systemd container factory successfully Jul 7 06:16:26.638910 kubelet[3113]: I0707 06:16:26.638896 3113 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:16:26.699867 kubelet[3113]: I0707 06:16:26.699851 3113 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:16:26.699867 kubelet[3113]: I0707 06:16:26.699866 3113 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:16:26.700217 kubelet[3113]: I0707 06:16:26.699883 3113 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:16:26.700217 kubelet[3113]: I0707 06:16:26.700011 3113 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:16:26.700217 kubelet[3113]: I0707 06:16:26.700021 3113 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:16:26.700217 kubelet[3113]: I0707 06:16:26.700055 3113 policy_none.go:49] "None policy: Start" Jul 7 06:16:26.700217 kubelet[3113]: I0707 06:16:26.700073 3113 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:16:26.700217 kubelet[3113]: I0707 06:16:26.700085 3113 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:16:26.700217 kubelet[3113]: I0707 06:16:26.700211 3113 state_mem.go:75] "Updated machine memory state" Jul 7 06:16:26.704972 kubelet[3113]: E0707 06:16:26.704952 3113 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 06:16:26.705210 kubelet[3113]: I0707 06:16:26.705092 3113 eviction_manager.go:189] "Eviction manager: starting 
control loop" Jul 7 06:16:26.705210 kubelet[3113]: I0707 06:16:26.705103 3113 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:16:26.706200 kubelet[3113]: I0707 06:16:26.706103 3113 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:16:26.707989 kubelet[3113]: E0707 06:16:26.707967 3113 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 06:16:26.734601 kubelet[3113]: I0707 06:16:26.734416 3113 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:26.735533 kubelet[3113]: I0707 06:16:26.735521 3113 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:26.735658 kubelet[3113]: I0707 06:16:26.735650 3113 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:26.742480 kubelet[3113]: I0707 06:16:26.742461 3113 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 06:16:26.746341 kubelet[3113]: I0707 06:16:26.746321 3113 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 06:16:26.746504 kubelet[3113]: I0707 06:16:26.746437 3113 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 06:16:26.812599 kubelet[3113]: I0707 06:16:26.812390 3113 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:26.823884 kubelet[3113]: I0707 06:16:26.823862 3113 
kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:26.823982 kubelet[3113]: I0707 06:16:26.823918 3113 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:26.884086 update_engine[1709]: I20250707 06:16:26.884028 1709 update_attempter.cc:509] Updating boot flags... Jul 7 06:16:26.930715 kubelet[3113]: I0707 06:16:26.930693 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c7383b6776b16e9f3e1defe6f85824c-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-a-ca7a3a169f\" (UID: \"0c7383b6776b16e9f3e1defe6f85824c\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:26.931084 kubelet[3113]: I0707 06:16:26.931034 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3aecc88260d744202c8c6bfbe88af62f-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-ca7a3a169f\" (UID: \"3aecc88260d744202c8c6bfbe88af62f\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:26.931084 kubelet[3113]: I0707 06:16:26.931064 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3aecc88260d744202c8c6bfbe88af62f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-a-ca7a3a169f\" (UID: \"3aecc88260d744202c8c6bfbe88af62f\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:26.932037 kubelet[3113]: I0707 06:16:26.931204 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a804543d44cdc77fc5870c8104c10471-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-a-ca7a3a169f\" (UID: 
\"a804543d44cdc77fc5870c8104c10471\") " pod="kube-system/kube-scheduler-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:26.932037 kubelet[3113]: I0707 06:16:26.931879 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c7383b6776b16e9f3e1defe6f85824c-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-a-ca7a3a169f\" (UID: \"0c7383b6776b16e9f3e1defe6f85824c\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:26.932037 kubelet[3113]: I0707 06:16:26.931929 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c7383b6776b16e9f3e1defe6f85824c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-a-ca7a3a169f\" (UID: \"0c7383b6776b16e9f3e1defe6f85824c\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:26.932037 kubelet[3113]: I0707 06:16:26.931949 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3aecc88260d744202c8c6bfbe88af62f-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-a-ca7a3a169f\" (UID: \"3aecc88260d744202c8c6bfbe88af62f\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:26.932037 kubelet[3113]: I0707 06:16:26.931970 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3aecc88260d744202c8c6bfbe88af62f-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-ca7a3a169f\" (UID: \"3aecc88260d744202c8c6bfbe88af62f\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:26.932204 kubelet[3113]: I0707 06:16:26.932013 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/3aecc88260d744202c8c6bfbe88af62f-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-a-ca7a3a169f\" (UID: \"3aecc88260d744202c8c6bfbe88af62f\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:27.604070 kubelet[3113]: I0707 06:16:27.604034 3113 apiserver.go:52] "Watching apiserver" Jul 7 06:16:27.630430 kubelet[3113]: I0707 06:16:27.630397 3113 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:16:27.684382 kubelet[3113]: I0707 06:16:27.684319 3113 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:27.685171 kubelet[3113]: I0707 06:16:27.684553 3113 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:27.695045 kubelet[3113]: I0707 06:16:27.695023 3113 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 06:16:27.695144 kubelet[3113]: E0707 06:16:27.695093 3113 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-a-ca7a3a169f\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:27.696820 kubelet[3113]: I0707 06:16:27.696753 3113 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 06:16:27.697093 kubelet[3113]: E0707 06:16:27.696904 3113 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-a-ca7a3a169f\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.1-a-ca7a3a169f" Jul 7 06:16:27.714046 kubelet[3113]: I0707 06:16:27.713987 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4372.0.1-a-ca7a3a169f" podStartSLOduration=1.7139715 podStartE2EDuration="1.7139715s" podCreationTimestamp="2025-07-07 06:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:27.713641011 +0000 UTC m=+1.170086726" watchObservedRunningTime="2025-07-07 06:16:27.7139715 +0000 UTC m=+1.170417212" Jul 7 06:16:27.714175 kubelet[3113]: I0707 06:16:27.714150 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.1-a-ca7a3a169f" podStartSLOduration=1.7141438070000001 podStartE2EDuration="1.714143807s" podCreationTimestamp="2025-07-07 06:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:27.70535398 +0000 UTC m=+1.161799697" watchObservedRunningTime="2025-07-07 06:16:27.714143807 +0000 UTC m=+1.170589521" Jul 7 06:16:27.745371 kubelet[3113]: I0707 06:16:27.745332 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.1-a-ca7a3a169f" podStartSLOduration=1.7453200020000001 podStartE2EDuration="1.745320002s" podCreationTimestamp="2025-07-07 06:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:27.726963524 +0000 UTC m=+1.183409251" watchObservedRunningTime="2025-07-07 06:16:27.745320002 +0000 UTC m=+1.201765717" Jul 7 06:16:31.425973 kubelet[3113]: I0707 06:16:31.425935 3113 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:16:31.426605 containerd[1734]: time="2025-07-07T06:16:31.426319587Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 7 06:16:31.426994 kubelet[3113]: I0707 06:16:31.426923 3113 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 06:16:32.356608 systemd[1]: Created slice kubepods-besteffort-poda97096ff_23dd_46fa_ae43_b76f1c3f10a8.slice - libcontainer container kubepods-besteffort-poda97096ff_23dd_46fa_ae43_b76f1c3f10a8.slice. Jul 7 06:16:32.368796 kubelet[3113]: I0707 06:16:32.368632 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a97096ff-23dd-46fa-ae43-b76f1c3f10a8-lib-modules\") pod \"kube-proxy-x52t6\" (UID: \"a97096ff-23dd-46fa-ae43-b76f1c3f10a8\") " pod="kube-system/kube-proxy-x52t6" Jul 7 06:16:32.368796 kubelet[3113]: I0707 06:16:32.368675 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhbw5\" (UniqueName: \"kubernetes.io/projected/a97096ff-23dd-46fa-ae43-b76f1c3f10a8-kube-api-access-xhbw5\") pod \"kube-proxy-x52t6\" (UID: \"a97096ff-23dd-46fa-ae43-b76f1c3f10a8\") " pod="kube-system/kube-proxy-x52t6" Jul 7 06:16:32.368796 kubelet[3113]: I0707 06:16:32.368700 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a97096ff-23dd-46fa-ae43-b76f1c3f10a8-kube-proxy\") pod \"kube-proxy-x52t6\" (UID: \"a97096ff-23dd-46fa-ae43-b76f1c3f10a8\") " pod="kube-system/kube-proxy-x52t6" Jul 7 06:16:32.368796 kubelet[3113]: I0707 06:16:32.368722 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a97096ff-23dd-46fa-ae43-b76f1c3f10a8-xtables-lock\") pod \"kube-proxy-x52t6\" (UID: \"a97096ff-23dd-46fa-ae43-b76f1c3f10a8\") " pod="kube-system/kube-proxy-x52t6" Jul 7 06:16:32.669566 containerd[1734]: time="2025-07-07T06:16:32.669447621Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-x52t6,Uid:a97096ff-23dd-46fa-ae43-b76f1c3f10a8,Namespace:kube-system,Attempt:0,}" Jul 7 06:16:32.677295 systemd[1]: Created slice kubepods-besteffort-pod70b1535a_53b4_49d9_b7fa_62799fd16862.slice - libcontainer container kubepods-besteffort-pod70b1535a_53b4_49d9_b7fa_62799fd16862.slice. Jul 7 06:16:32.731453 containerd[1734]: time="2025-07-07T06:16:32.731007307Z" level=info msg="connecting to shim 86a80683067db5e79fe1019c297b59eccfa3d537261e4ee0637a6cc557b34ee6" address="unix:///run/containerd/s/a8042723fa5ae22e2f5d90cb2a41cf5a456b4c21e652f53e1d78abf7b4177ccf" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:32.757086 systemd[1]: Started cri-containerd-86a80683067db5e79fe1019c297b59eccfa3d537261e4ee0637a6cc557b34ee6.scope - libcontainer container 86a80683067db5e79fe1019c297b59eccfa3d537261e4ee0637a6cc557b34ee6. Jul 7 06:16:32.770330 kubelet[3113]: I0707 06:16:32.770303 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/70b1535a-53b4-49d9-b7fa-62799fd16862-var-lib-calico\") pod \"tigera-operator-747864d56d-rhkmq\" (UID: \"70b1535a-53b4-49d9-b7fa-62799fd16862\") " pod="tigera-operator/tigera-operator-747864d56d-rhkmq" Jul 7 06:16:32.770831 kubelet[3113]: I0707 06:16:32.770339 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvwsq\" (UniqueName: \"kubernetes.io/projected/70b1535a-53b4-49d9-b7fa-62799fd16862-kube-api-access-tvwsq\") pod \"tigera-operator-747864d56d-rhkmq\" (UID: \"70b1535a-53b4-49d9-b7fa-62799fd16862\") " pod="tigera-operator/tigera-operator-747864d56d-rhkmq" Jul 7 06:16:32.782371 containerd[1734]: time="2025-07-07T06:16:32.782340182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x52t6,Uid:a97096ff-23dd-46fa-ae43-b76f1c3f10a8,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"86a80683067db5e79fe1019c297b59eccfa3d537261e4ee0637a6cc557b34ee6\"" Jul 7 06:16:32.797414 containerd[1734]: time="2025-07-07T06:16:32.797379830Z" level=info msg="CreateContainer within sandbox \"86a80683067db5e79fe1019c297b59eccfa3d537261e4ee0637a6cc557b34ee6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 06:16:32.818388 containerd[1734]: time="2025-07-07T06:16:32.818354661Z" level=info msg="Container 8daa33b2feffb754100f39f4acedf2f9570aeb1ddef7ce72398c068e0b83ee2b: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:32.822151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3192086527.mount: Deactivated successfully. Jul 7 06:16:32.843784 containerd[1734]: time="2025-07-07T06:16:32.843753678Z" level=info msg="CreateContainer within sandbox \"86a80683067db5e79fe1019c297b59eccfa3d537261e4ee0637a6cc557b34ee6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8daa33b2feffb754100f39f4acedf2f9570aeb1ddef7ce72398c068e0b83ee2b\"" Jul 7 06:16:32.844641 containerd[1734]: time="2025-07-07T06:16:32.844230129Z" level=info msg="StartContainer for \"8daa33b2feffb754100f39f4acedf2f9570aeb1ddef7ce72398c068e0b83ee2b\"" Jul 7 06:16:32.845717 containerd[1734]: time="2025-07-07T06:16:32.845688765Z" level=info msg="connecting to shim 8daa33b2feffb754100f39f4acedf2f9570aeb1ddef7ce72398c068e0b83ee2b" address="unix:///run/containerd/s/a8042723fa5ae22e2f5d90cb2a41cf5a456b4c21e652f53e1d78abf7b4177ccf" protocol=ttrpc version=3 Jul 7 06:16:32.864961 systemd[1]: Started cri-containerd-8daa33b2feffb754100f39f4acedf2f9570aeb1ddef7ce72398c068e0b83ee2b.scope - libcontainer container 8daa33b2feffb754100f39f4acedf2f9570aeb1ddef7ce72398c068e0b83ee2b. 
Jul 7 06:16:32.896787 containerd[1734]: time="2025-07-07T06:16:32.896752363Z" level=info msg="StartContainer for \"8daa33b2feffb754100f39f4acedf2f9570aeb1ddef7ce72398c068e0b83ee2b\" returns successfully" Jul 7 06:16:32.981559 containerd[1734]: time="2025-07-07T06:16:32.981186821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-rhkmq,Uid:70b1535a-53b4-49d9-b7fa-62799fd16862,Namespace:tigera-operator,Attempt:0,}" Jul 7 06:16:33.034043 containerd[1734]: time="2025-07-07T06:16:33.034006682Z" level=info msg="connecting to shim 40a583cf3e721e4e0d838eac43c4ca36e624c4295e3e20cbdcff815c42a22e73" address="unix:///run/containerd/s/3ea29bb2eeb50a3fbc185e2bfd49ef839814cdae1705b56415e428714f52d777" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:33.055948 systemd[1]: Started cri-containerd-40a583cf3e721e4e0d838eac43c4ca36e624c4295e3e20cbdcff815c42a22e73.scope - libcontainer container 40a583cf3e721e4e0d838eac43c4ca36e624c4295e3e20cbdcff815c42a22e73. Jul 7 06:16:33.097830 containerd[1734]: time="2025-07-07T06:16:33.097788548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-rhkmq,Uid:70b1535a-53b4-49d9-b7fa-62799fd16862,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"40a583cf3e721e4e0d838eac43c4ca36e624c4295e3e20cbdcff815c42a22e73\"" Jul 7 06:16:33.100106 containerd[1734]: time="2025-07-07T06:16:33.100022659Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 06:16:33.712827 kubelet[3113]: I0707 06:16:33.712384 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x52t6" podStartSLOduration=1.712364585 podStartE2EDuration="1.712364585s" podCreationTimestamp="2025-07-07 06:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:33.712105228 +0000 UTC m=+7.168550941" watchObservedRunningTime="2025-07-07 06:16:33.712364585 
+0000 UTC m=+7.168810301" Jul 7 06:16:34.947088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3351208530.mount: Deactivated successfully. Jul 7 06:16:35.414051 containerd[1734]: time="2025-07-07T06:16:35.414004929Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:35.419201 containerd[1734]: time="2025-07-07T06:16:35.419164569Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 06:16:35.426595 containerd[1734]: time="2025-07-07T06:16:35.426550759Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:35.435518 containerd[1734]: time="2025-07-07T06:16:35.435474741Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:35.436043 containerd[1734]: time="2025-07-07T06:16:35.435941132Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.335823408s" Jul 7 06:16:35.436043 containerd[1734]: time="2025-07-07T06:16:35.435970826Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 06:16:35.443638 containerd[1734]: time="2025-07-07T06:16:35.443600430Z" level=info msg="CreateContainer within sandbox \"40a583cf3e721e4e0d838eac43c4ca36e624c4295e3e20cbdcff815c42a22e73\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 06:16:35.474591 containerd[1734]: time="2025-07-07T06:16:35.472749827Z" level=info msg="Container 7dc237259ffee29dc604bff2d12c771ceb9fed110a4bba82375e4e9dc8acbf16: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:35.492295 containerd[1734]: time="2025-07-07T06:16:35.492270335Z" level=info msg="CreateContainer within sandbox \"40a583cf3e721e4e0d838eac43c4ca36e624c4295e3e20cbdcff815c42a22e73\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7dc237259ffee29dc604bff2d12c771ceb9fed110a4bba82375e4e9dc8acbf16\"" Jul 7 06:16:35.495287 containerd[1734]: time="2025-07-07T06:16:35.495260219Z" level=info msg="StartContainer for \"7dc237259ffee29dc604bff2d12c771ceb9fed110a4bba82375e4e9dc8acbf16\"" Jul 7 06:16:35.496216 containerd[1734]: time="2025-07-07T06:16:35.496192269Z" level=info msg="connecting to shim 7dc237259ffee29dc604bff2d12c771ceb9fed110a4bba82375e4e9dc8acbf16" address="unix:///run/containerd/s/3ea29bb2eeb50a3fbc185e2bfd49ef839814cdae1705b56415e428714f52d777" protocol=ttrpc version=3 Jul 7 06:16:35.518978 systemd[1]: Started cri-containerd-7dc237259ffee29dc604bff2d12c771ceb9fed110a4bba82375e4e9dc8acbf16.scope - libcontainer container 7dc237259ffee29dc604bff2d12c771ceb9fed110a4bba82375e4e9dc8acbf16. 
Jul 7 06:16:35.550446 containerd[1734]: time="2025-07-07T06:16:35.550406787Z" level=info msg="StartContainer for \"7dc237259ffee29dc604bff2d12c771ceb9fed110a4bba82375e4e9dc8acbf16\" returns successfully" Jul 7 06:16:35.712204 kubelet[3113]: I0707 06:16:35.711484 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-rhkmq" podStartSLOduration=1.3734697900000001 podStartE2EDuration="3.711467394s" podCreationTimestamp="2025-07-07 06:16:32 +0000 UTC" firstStartedPulling="2025-07-07 06:16:33.098710566 +0000 UTC m=+6.555156283" lastFinishedPulling="2025-07-07 06:16:35.43670817 +0000 UTC m=+8.893153887" observedRunningTime="2025-07-07 06:16:35.711287083 +0000 UTC m=+9.167732796" watchObservedRunningTime="2025-07-07 06:16:35.711467394 +0000 UTC m=+9.167913109" Jul 7 06:16:41.295206 sudo[2151]: pam_unix(sudo:session): session closed for user root Jul 7 06:16:41.401249 sshd[2150]: Connection closed by 10.200.16.10 port 48632 Jul 7 06:16:41.400286 sshd-session[2148]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:41.406044 systemd[1]: sshd@6-10.200.4.8:22-10.200.16.10:48632.service: Deactivated successfully. Jul 7 06:16:41.410558 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 06:16:41.411620 systemd[1]: session-9.scope: Consumed 3.667s CPU time, 232.3M memory peak. Jul 7 06:16:41.415145 systemd-logind[1706]: Session 9 logged out. Waiting for processes to exit. Jul 7 06:16:41.418356 systemd-logind[1706]: Removed session 9. 
Jul 7 06:16:44.842151 kubelet[3113]: I0707 06:16:44.841675 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4nqp\" (UniqueName: \"kubernetes.io/projected/cc625048-6dcf-4f43-9f3c-28503c12f368-kube-api-access-t4nqp\") pod \"calico-typha-58c6c4b556-npp2q\" (UID: \"cc625048-6dcf-4f43-9f3c-28503c12f368\") " pod="calico-system/calico-typha-58c6c4b556-npp2q"
Jul 7 06:16:44.842151 kubelet[3113]: I0707 06:16:44.841733 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc625048-6dcf-4f43-9f3c-28503c12f368-tigera-ca-bundle\") pod \"calico-typha-58c6c4b556-npp2q\" (UID: \"cc625048-6dcf-4f43-9f3c-28503c12f368\") " pod="calico-system/calico-typha-58c6c4b556-npp2q"
Jul 7 06:16:44.842151 kubelet[3113]: I0707 06:16:44.841759 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cc625048-6dcf-4f43-9f3c-28503c12f368-typha-certs\") pod \"calico-typha-58c6c4b556-npp2q\" (UID: \"cc625048-6dcf-4f43-9f3c-28503c12f368\") " pod="calico-system/calico-typha-58c6c4b556-npp2q"
Jul 7 06:16:44.850770 systemd[1]: Created slice kubepods-besteffort-podcc625048_6dcf_4f43_9f3c_28503c12f368.slice - libcontainer container kubepods-besteffort-podcc625048_6dcf_4f43_9f3c_28503c12f368.slice.
Jul 7 06:16:45.167432 containerd[1734]: time="2025-07-07T06:16:45.166638063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c6c4b556-npp2q,Uid:cc625048-6dcf-4f43-9f3c-28503c12f368,Namespace:calico-system,Attempt:0,}"
Jul 7 06:16:45.173208 systemd[1]: Created slice kubepods-besteffort-poda2e47e2c_b10c_4075_9c68_b5919a63eac2.slice - libcontainer container kubepods-besteffort-poda2e47e2c_b10c_4075_9c68_b5919a63eac2.slice.
Jul 7 06:16:45.235125 containerd[1734]: time="2025-07-07T06:16:45.235079413Z" level=info msg="connecting to shim 1843a93abdd9e6ba7c828be26c1cc4582df79b2564b0772d2707942f7091fc31" address="unix:///run/containerd/s/e61e2d3883b605b773705c6316d620030a48aec3a5d46f86c8c97bcdf8f9ec9c" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:16:45.245406 kubelet[3113]: I0707 06:16:45.245068 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a2e47e2c-b10c-4075-9c68-b5919a63eac2-policysync\") pod \"calico-node-qmlst\" (UID: \"a2e47e2c-b10c-4075-9c68-b5919a63eac2\") " pod="calico-system/calico-node-qmlst"
Jul 7 06:16:45.245406 kubelet[3113]: I0707 06:16:45.245132 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a2e47e2c-b10c-4075-9c68-b5919a63eac2-var-lib-calico\") pod \"calico-node-qmlst\" (UID: \"a2e47e2c-b10c-4075-9c68-b5919a63eac2\") " pod="calico-system/calico-node-qmlst"
Jul 7 06:16:45.245406 kubelet[3113]: I0707 06:16:45.245154 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a2e47e2c-b10c-4075-9c68-b5919a63eac2-cni-bin-dir\") pod \"calico-node-qmlst\" (UID: \"a2e47e2c-b10c-4075-9c68-b5919a63eac2\") " pod="calico-system/calico-node-qmlst"
Jul 7 06:16:45.245406 kubelet[3113]: I0707 06:16:45.245197 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a2e47e2c-b10c-4075-9c68-b5919a63eac2-node-certs\") pod \"calico-node-qmlst\" (UID: \"a2e47e2c-b10c-4075-9c68-b5919a63eac2\") " pod="calico-system/calico-node-qmlst"
Jul 7 06:16:45.245406 kubelet[3113]: I0707 06:16:45.245225 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a2e47e2c-b10c-4075-9c68-b5919a63eac2-cni-log-dir\") pod \"calico-node-qmlst\" (UID: \"a2e47e2c-b10c-4075-9c68-b5919a63eac2\") " pod="calico-system/calico-node-qmlst"
Jul 7 06:16:45.245692 kubelet[3113]: I0707 06:16:45.245319 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a2e47e2c-b10c-4075-9c68-b5919a63eac2-cni-net-dir\") pod \"calico-node-qmlst\" (UID: \"a2e47e2c-b10c-4075-9c68-b5919a63eac2\") " pod="calico-system/calico-node-qmlst"
Jul 7 06:16:45.245692 kubelet[3113]: I0707 06:16:45.245349 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a2e47e2c-b10c-4075-9c68-b5919a63eac2-flexvol-driver-host\") pod \"calico-node-qmlst\" (UID: \"a2e47e2c-b10c-4075-9c68-b5919a63eac2\") " pod="calico-system/calico-node-qmlst"
Jul 7 06:16:45.245692 kubelet[3113]: I0707 06:16:45.245380 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a2e47e2c-b10c-4075-9c68-b5919a63eac2-var-run-calico\") pod \"calico-node-qmlst\" (UID: \"a2e47e2c-b10c-4075-9c68-b5919a63eac2\") " pod="calico-system/calico-node-qmlst"
Jul 7 06:16:45.245692 kubelet[3113]: I0707 06:16:45.245403 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2e47e2c-b10c-4075-9c68-b5919a63eac2-tigera-ca-bundle\") pod \"calico-node-qmlst\" (UID: \"a2e47e2c-b10c-4075-9c68-b5919a63eac2\") " pod="calico-system/calico-node-qmlst"
Jul 7 06:16:45.245692 kubelet[3113]: I0707 06:16:45.245423 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4xl4\" (UniqueName: \"kubernetes.io/projected/a2e47e2c-b10c-4075-9c68-b5919a63eac2-kube-api-access-z4xl4\") pod \"calico-node-qmlst\" (UID: \"a2e47e2c-b10c-4075-9c68-b5919a63eac2\") " pod="calico-system/calico-node-qmlst"
Jul 7 06:16:45.245913 kubelet[3113]: I0707 06:16:45.245444 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2e47e2c-b10c-4075-9c68-b5919a63eac2-lib-modules\") pod \"calico-node-qmlst\" (UID: \"a2e47e2c-b10c-4075-9c68-b5919a63eac2\") " pod="calico-system/calico-node-qmlst"
Jul 7 06:16:45.245913 kubelet[3113]: I0707 06:16:45.245463 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2e47e2c-b10c-4075-9c68-b5919a63eac2-xtables-lock\") pod \"calico-node-qmlst\" (UID: \"a2e47e2c-b10c-4075-9c68-b5919a63eac2\") " pod="calico-system/calico-node-qmlst"
Jul 7 06:16:45.261936 systemd[1]: Started cri-containerd-1843a93abdd9e6ba7c828be26c1cc4582df79b2564b0772d2707942f7091fc31.scope - libcontainer container 1843a93abdd9e6ba7c828be26c1cc4582df79b2564b0772d2707942f7091fc31.
Jul 7 06:16:45.305408 containerd[1734]: time="2025-07-07T06:16:45.305381221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c6c4b556-npp2q,Uid:cc625048-6dcf-4f43-9f3c-28503c12f368,Namespace:calico-system,Attempt:0,} returns sandbox id \"1843a93abdd9e6ba7c828be26c1cc4582df79b2564b0772d2707942f7091fc31\""
Jul 7 06:16:45.306584 containerd[1734]: time="2025-07-07T06:16:45.306560971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 7 06:16:45.348613 kubelet[3113]: E0707 06:16:45.348100 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.348613 kubelet[3113]: W0707 06:16:45.348169 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.350904 kubelet[3113]: E0707 06:16:45.350826 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.353466 kubelet[3113]: E0707 06:16:45.353254 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.353466 kubelet[3113]: W0707 06:16:45.353270 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.353466 kubelet[3113]: E0707 06:16:45.353288 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.353715 kubelet[3113]: E0707 06:16:45.353707 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.353911 kubelet[3113]: W0707 06:16:45.353824 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.353911 kubelet[3113]: E0707 06:16:45.353840 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.354081 kubelet[3113]: E0707 06:16:45.354073 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.354246 kubelet[3113]: W0707 06:16:45.354220 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.354246 kubelet[3113]: E0707 06:16:45.354235 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.354720 kubelet[3113]: E0707 06:16:45.354655 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.354901 kubelet[3113]: W0707 06:16:45.354785 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.354901 kubelet[3113]: E0707 06:16:45.354800 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.355293 kubelet[3113]: E0707 06:16:45.355279 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.355342 kubelet[3113]: W0707 06:16:45.355294 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.355342 kubelet[3113]: E0707 06:16:45.355306 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.355514 kubelet[3113]: E0707 06:16:45.355487 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.355514 kubelet[3113]: W0707 06:16:45.355495 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.355583 kubelet[3113]: E0707 06:16:45.355518 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.355703 kubelet[3113]: E0707 06:16:45.355688 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.355703 kubelet[3113]: W0707 06:16:45.355698 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.355769 kubelet[3113]: E0707 06:16:45.355708 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 7 06:16:45.355873 kubelet[3113]: E0707 06:16:45.355862 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.355906 kubelet[3113]: W0707 06:16:45.355874 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.355906 kubelet[3113]: E0707 06:16:45.355882 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.356006 kubelet[3113]: E0707 06:16:45.355995 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.356006 kubelet[3113]: W0707 06:16:45.356003 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.356059 kubelet[3113]: E0707 06:16:45.356010 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.356144 kubelet[3113]: E0707 06:16:45.356134 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.356175 kubelet[3113]: W0707 06:16:45.356142 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.356175 kubelet[3113]: E0707 06:16:45.356151 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.356292 kubelet[3113]: E0707 06:16:45.356283 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.356292 kubelet[3113]: W0707 06:16:45.356290 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.356358 kubelet[3113]: E0707 06:16:45.356297 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.356823 kubelet[3113]: E0707 06:16:45.356543 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.356823 kubelet[3113]: W0707 06:16:45.356556 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.356823 kubelet[3113]: E0707 06:16:45.356568 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.356823 kubelet[3113]: E0707 06:16:45.356718 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.356823 kubelet[3113]: W0707 06:16:45.356723 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.356823 kubelet[3113]: E0707 06:16:45.356729 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.361667 kubelet[3113]: E0707 06:16:45.361650 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.361667 kubelet[3113]: W0707 06:16:45.361664 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.361765 kubelet[3113]: E0707 06:16:45.361677 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.409682 kubelet[3113]: E0707 06:16:45.409644 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pv69l" podUID="d40c33b8-3587-4ee3-89bf-e1ad24c997ef"
Jul 7 06:16:45.441970 kubelet[3113]: E0707 06:16:45.441894 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.442147 kubelet[3113]: W0707 06:16:45.442044 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.442147 kubelet[3113]: E0707 06:16:45.442062 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 7 06:16:45.443923 kubelet[3113]: E0707 06:16:45.443903 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.444068 kubelet[3113]: W0707 06:16:45.444015 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.444068 kubelet[3113]: E0707 06:16:45.444046 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.444320 kubelet[3113]: E0707 06:16:45.444272 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.444320 kubelet[3113]: W0707 06:16:45.444280 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.444320 kubelet[3113]: E0707 06:16:45.444288 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.444592 kubelet[3113]: E0707 06:16:45.444540 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.444592 kubelet[3113]: W0707 06:16:45.444547 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.444592 kubelet[3113]: E0707 06:16:45.444555 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.444814 kubelet[3113]: E0707 06:16:45.444777 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.444814 kubelet[3113]: W0707 06:16:45.444786 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.444814 kubelet[3113]: E0707 06:16:45.444794 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.445030 kubelet[3113]: E0707 06:16:45.444994 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.445030 kubelet[3113]: W0707 06:16:45.445001 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.445030 kubelet[3113]: E0707 06:16:45.445009 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.445199 kubelet[3113]: E0707 06:16:45.445194 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.445276 kubelet[3113]: W0707 06:16:45.445227 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.445276 kubelet[3113]: E0707 06:16:45.445235 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.445453 kubelet[3113]: E0707 06:16:45.445405 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.445453 kubelet[3113]: W0707 06:16:45.445412 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.445453 kubelet[3113]: E0707 06:16:45.445419 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.445605 kubelet[3113]: E0707 06:16:45.445599 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.445666 kubelet[3113]: W0707 06:16:45.445639 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.445666 kubelet[3113]: E0707 06:16:45.445647 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 7 06:16:45.445816 kubelet[3113]: E0707 06:16:45.445777 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.445816 kubelet[3113]: W0707 06:16:45.445784 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.445816 kubelet[3113]: E0707 06:16:45.445790 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.445984 kubelet[3113]: E0707 06:16:45.445978 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.446048 kubelet[3113]: W0707 06:16:45.446018 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.446048 kubelet[3113]: E0707 06:16:45.446027 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.446202 kubelet[3113]: E0707 06:16:45.446197 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.446285 kubelet[3113]: W0707 06:16:45.446235 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.446285 kubelet[3113]: E0707 06:16:45.446243 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.446430 kubelet[3113]: E0707 06:16:45.446424 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.446496 kubelet[3113]: W0707 06:16:45.446464 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.446496 kubelet[3113]: E0707 06:16:45.446474 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.446735 kubelet[3113]: E0707 06:16:45.446729 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.446820 kubelet[3113]: W0707 06:16:45.446771 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.446820 kubelet[3113]: E0707 06:16:45.446781 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.448139 kubelet[3113]: E0707 06:16:45.448122 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.448179 kubelet[3113]: W0707 06:16:45.448141 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.448179 kubelet[3113]: E0707 06:16:45.448156 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.448290 kubelet[3113]: E0707 06:16:45.448282 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.448323 kubelet[3113]: W0707 06:16:45.448290 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.448323 kubelet[3113]: E0707 06:16:45.448298 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.448434 kubelet[3113]: E0707 06:16:45.448426 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.448460 kubelet[3113]: W0707 06:16:45.448434 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.448460 kubelet[3113]: E0707 06:16:45.448443 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 7 06:16:45.448553 kubelet[3113]: E0707 06:16:45.448546 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.448578 kubelet[3113]: W0707 06:16:45.448554 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.448578 kubelet[3113]: E0707 06:16:45.448561 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.448652 kubelet[3113]: E0707 06:16:45.448645 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.448676 kubelet[3113]: W0707 06:16:45.448653 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.448676 kubelet[3113]: E0707 06:16:45.448659 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.448750 kubelet[3113]: E0707 06:16:45.448743 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.448773 kubelet[3113]: W0707 06:16:45.448751 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.448773 kubelet[3113]: E0707 06:16:45.448757 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.448954 kubelet[3113]: E0707 06:16:45.448943 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.448984 kubelet[3113]: W0707 06:16:45.448956 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.448984 kubelet[3113]: E0707 06:16:45.448963 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.449086 kubelet[3113]: E0707 06:16:45.449079 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.449113 kubelet[3113]: W0707 06:16:45.449087 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.449113 kubelet[3113]: E0707 06:16:45.449093 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.449194 kubelet[3113]: I0707 06:16:45.449053 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d40c33b8-3587-4ee3-89bf-e1ad24c997ef-socket-dir\") pod \"csi-node-driver-pv69l\" (UID: \"d40c33b8-3587-4ee3-89bf-e1ad24c997ef\") " pod="calico-system/csi-node-driver-pv69l"
Jul 7 06:16:45.449888 kubelet[3113]: E0707 06:16:45.449875 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.449934 kubelet[3113]: W0707 06:16:45.449888 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.449934 kubelet[3113]: E0707 06:16:45.449898 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.450063 kubelet[3113]: E0707 06:16:45.450054 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.450090 kubelet[3113]: W0707 06:16:45.450063 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.450090 kubelet[3113]: E0707 06:16:45.450070 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:45.450140 kubelet[3113]: I0707 06:16:45.450095 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d40c33b8-3587-4ee3-89bf-e1ad24c997ef-kubelet-dir\") pod \"csi-node-driver-pv69l\" (UID: \"d40c33b8-3587-4ee3-89bf-e1ad24c997ef\") " pod="calico-system/csi-node-driver-pv69l"
Jul 7 06:16:45.450216 kubelet[3113]: E0707 06:16:45.450207 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:45.450242 kubelet[3113]: W0707 06:16:45.450217 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:45.450242 kubelet[3113]: E0707 06:16:45.450224 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.450285 kubelet[3113]: I0707 06:16:45.450245 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d40c33b8-3587-4ee3-89bf-e1ad24c997ef-registration-dir\") pod \"csi-node-driver-pv69l\" (UID: \"d40c33b8-3587-4ee3-89bf-e1ad24c997ef\") " pod="calico-system/csi-node-driver-pv69l" Jul 7 06:16:45.450376 kubelet[3113]: E0707 06:16:45.450367 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.450402 kubelet[3113]: W0707 06:16:45.450376 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.450402 kubelet[3113]: E0707 06:16:45.450384 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.450447 kubelet[3113]: I0707 06:16:45.450407 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59qzm\" (UniqueName: \"kubernetes.io/projected/d40c33b8-3587-4ee3-89bf-e1ad24c997ef-kube-api-access-59qzm\") pod \"csi-node-driver-pv69l\" (UID: \"d40c33b8-3587-4ee3-89bf-e1ad24c997ef\") " pod="calico-system/csi-node-driver-pv69l" Jul 7 06:16:45.450533 kubelet[3113]: E0707 06:16:45.450523 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.450560 kubelet[3113]: W0707 06:16:45.450533 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.450560 kubelet[3113]: E0707 06:16:45.450540 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.450606 kubelet[3113]: I0707 06:16:45.450560 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d40c33b8-3587-4ee3-89bf-e1ad24c997ef-varrun\") pod \"csi-node-driver-pv69l\" (UID: \"d40c33b8-3587-4ee3-89bf-e1ad24c997ef\") " pod="calico-system/csi-node-driver-pv69l" Jul 7 06:16:45.450679 kubelet[3113]: E0707 06:16:45.450670 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.450706 kubelet[3113]: W0707 06:16:45.450680 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.450706 kubelet[3113]: E0707 06:16:45.450687 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.451948 kubelet[3113]: E0707 06:16:45.451920 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.452029 kubelet[3113]: W0707 06:16:45.451951 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.452029 kubelet[3113]: E0707 06:16:45.451968 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.452166 kubelet[3113]: E0707 06:16:45.452154 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.452199 kubelet[3113]: W0707 06:16:45.452166 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.452199 kubelet[3113]: E0707 06:16:45.452176 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.452964 kubelet[3113]: E0707 06:16:45.452945 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.452964 kubelet[3113]: W0707 06:16:45.452964 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.453062 kubelet[3113]: E0707 06:16:45.452978 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.453174 kubelet[3113]: E0707 06:16:45.453164 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.453206 kubelet[3113]: W0707 06:16:45.453187 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.453206 kubelet[3113]: E0707 06:16:45.453197 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.453341 kubelet[3113]: E0707 06:16:45.453333 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.453370 kubelet[3113]: W0707 06:16:45.453342 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.453370 kubelet[3113]: E0707 06:16:45.453350 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.453856 kubelet[3113]: E0707 06:16:45.453839 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.453856 kubelet[3113]: W0707 06:16:45.453854 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.453951 kubelet[3113]: E0707 06:16:45.453866 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.454017 kubelet[3113]: E0707 06:16:45.454008 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.454039 kubelet[3113]: W0707 06:16:45.454019 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.454039 kubelet[3113]: E0707 06:16:45.454028 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.478909 containerd[1734]: time="2025-07-07T06:16:45.478545701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qmlst,Uid:a2e47e2c-b10c-4075-9c68-b5919a63eac2,Namespace:calico-system,Attempt:0,}" Jul 7 06:16:45.538239 containerd[1734]: time="2025-07-07T06:16:45.538210882Z" level=info msg="connecting to shim 4d711be28e44fde4d63c2749a3583bb2b0f156c6a4d84d18ba93f30efc7bd6d1" address="unix:///run/containerd/s/f62fb51d02b73ee060ac512813b643ca8771eec41a7c3287a8eabb1c07d5f052" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:45.553030 kubelet[3113]: E0707 06:16:45.553013 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.553122 kubelet[3113]: W0707 06:16:45.553113 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.553224 kubelet[3113]: E0707 06:16:45.553160 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.553391 kubelet[3113]: E0707 06:16:45.553383 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.553465 kubelet[3113]: W0707 06:16:45.553440 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.553465 kubelet[3113]: E0707 06:16:45.553454 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.553772 kubelet[3113]: E0707 06:16:45.553702 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.553772 kubelet[3113]: W0707 06:16:45.553710 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.553772 kubelet[3113]: E0707 06:16:45.553719 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.554035 kubelet[3113]: E0707 06:16:45.553967 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.554035 kubelet[3113]: W0707 06:16:45.553976 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.554035 kubelet[3113]: E0707 06:16:45.553985 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.554232 kubelet[3113]: E0707 06:16:45.554207 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.554232 kubelet[3113]: W0707 06:16:45.554214 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.554232 kubelet[3113]: E0707 06:16:45.554223 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.554502 kubelet[3113]: E0707 06:16:45.554457 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.554502 kubelet[3113]: W0707 06:16:45.554464 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.554502 kubelet[3113]: E0707 06:16:45.554472 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.554738 kubelet[3113]: E0707 06:16:45.554668 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.554738 kubelet[3113]: W0707 06:16:45.554675 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.554738 kubelet[3113]: E0707 06:16:45.554682 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.554947 kubelet[3113]: E0707 06:16:45.554901 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.554947 kubelet[3113]: W0707 06:16:45.554910 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.554947 kubelet[3113]: E0707 06:16:45.554918 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.557021 kubelet[3113]: E0707 06:16:45.556939 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.557021 kubelet[3113]: W0707 06:16:45.556956 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.557021 kubelet[3113]: E0707 06:16:45.556970 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.557333 kubelet[3113]: E0707 06:16:45.557271 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.557333 kubelet[3113]: W0707 06:16:45.557281 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.557333 kubelet[3113]: E0707 06:16:45.557292 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.557581 kubelet[3113]: E0707 06:16:45.557528 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.557581 kubelet[3113]: W0707 06:16:45.557536 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.557581 kubelet[3113]: E0707 06:16:45.557545 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.557821 kubelet[3113]: E0707 06:16:45.557754 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.557821 kubelet[3113]: W0707 06:16:45.557762 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.557821 kubelet[3113]: E0707 06:16:45.557778 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.559848 kubelet[3113]: E0707 06:16:45.559796 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.559998 kubelet[3113]: W0707 06:16:45.559920 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.559998 kubelet[3113]: E0707 06:16:45.559936 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.560363 kubelet[3113]: E0707 06:16:45.560249 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.560363 kubelet[3113]: W0707 06:16:45.560265 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.560363 kubelet[3113]: E0707 06:16:45.560279 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.560531 kubelet[3113]: E0707 06:16:45.560525 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.560615 kubelet[3113]: W0707 06:16:45.560566 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.560615 kubelet[3113]: E0707 06:16:45.560578 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.560818 kubelet[3113]: E0707 06:16:45.560768 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.560818 kubelet[3113]: W0707 06:16:45.560777 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.560818 kubelet[3113]: E0707 06:16:45.560786 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.561086 kubelet[3113]: E0707 06:16:45.561028 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.561086 kubelet[3113]: W0707 06:16:45.561036 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.561086 kubelet[3113]: E0707 06:16:45.561043 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.561268 kubelet[3113]: E0707 06:16:45.561230 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.561268 kubelet[3113]: W0707 06:16:45.561236 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.561268 kubelet[3113]: E0707 06:16:45.561244 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.561499 kubelet[3113]: E0707 06:16:45.561408 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.561499 kubelet[3113]: W0707 06:16:45.561416 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.561499 kubelet[3113]: E0707 06:16:45.561422 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.561640 kubelet[3113]: E0707 06:16:45.561634 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.561696 kubelet[3113]: W0707 06:16:45.561673 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.561696 kubelet[3113]: E0707 06:16:45.561683 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.561922 kubelet[3113]: E0707 06:16:45.561916 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.562002 kubelet[3113]: W0707 06:16:45.561960 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.562002 kubelet[3113]: E0707 06:16:45.561970 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.562619 kubelet[3113]: E0707 06:16:45.562606 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.562769 kubelet[3113]: W0707 06:16:45.562689 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.562769 kubelet[3113]: E0707 06:16:45.562702 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.562960 kubelet[3113]: E0707 06:16:45.562953 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.563002 kubelet[3113]: W0707 06:16:45.562996 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.563065 kubelet[3113]: E0707 06:16:45.563034 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.563186 kubelet[3113]: E0707 06:16:45.563179 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.563654 kubelet[3113]: W0707 06:16:45.563221 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.563654 kubelet[3113]: E0707 06:16:45.563230 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.563844 kubelet[3113]: E0707 06:16:45.563834 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.563894 kubelet[3113]: W0707 06:16:45.563885 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.563936 kubelet[3113]: E0707 06:16:45.563928 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:45.575139 systemd[1]: Started cri-containerd-4d711be28e44fde4d63c2749a3583bb2b0f156c6a4d84d18ba93f30efc7bd6d1.scope - libcontainer container 4d711be28e44fde4d63c2749a3583bb2b0f156c6a4d84d18ba93f30efc7bd6d1. Jul 7 06:16:45.582219 kubelet[3113]: E0707 06:16:45.582171 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:45.582219 kubelet[3113]: W0707 06:16:45.582183 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:45.582219 kubelet[3113]: E0707 06:16:45.582194 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:45.620674 containerd[1734]: time="2025-07-07T06:16:45.620627697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qmlst,Uid:a2e47e2c-b10c-4075-9c68-b5919a63eac2,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d711be28e44fde4d63c2749a3583bb2b0f156c6a4d84d18ba93f30efc7bd6d1\"" Jul 7 06:16:46.574920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2999992496.mount: Deactivated successfully. Jul 7 06:16:47.050313 containerd[1734]: time="2025-07-07T06:16:47.050266079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:47.053120 containerd[1734]: time="2025-07-07T06:16:47.053060338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 06:16:47.056925 containerd[1734]: time="2025-07-07T06:16:47.056885781Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:47.060884 containerd[1734]: time="2025-07-07T06:16:47.060850849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:47.061456 containerd[1734]: time="2025-07-07T06:16:47.061180107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.754576986s" Jul 7 06:16:47.061456 containerd[1734]: time="2025-07-07T06:16:47.061210420Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 06:16:47.062823 containerd[1734]: time="2025-07-07T06:16:47.062707408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 06:16:47.082711 containerd[1734]: time="2025-07-07T06:16:47.082687242Z" level=info msg="CreateContainer within sandbox \"1843a93abdd9e6ba7c828be26c1cc4582df79b2564b0772d2707942f7091fc31\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 06:16:47.104828 containerd[1734]: time="2025-07-07T06:16:47.103943548Z" level=info msg="Container c472d75189669d7b3e182c1142e5811876ddb7c6e55913ab14aa6def6d203949: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:47.128923 containerd[1734]: time="2025-07-07T06:16:47.128899166Z" level=info msg="CreateContainer within sandbox \"1843a93abdd9e6ba7c828be26c1cc4582df79b2564b0772d2707942f7091fc31\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c472d75189669d7b3e182c1142e5811876ddb7c6e55913ab14aa6def6d203949\"" Jul 7 06:16:47.129369 containerd[1734]: time="2025-07-07T06:16:47.129240855Z" level=info msg="StartContainer for \"c472d75189669d7b3e182c1142e5811876ddb7c6e55913ab14aa6def6d203949\"" Jul 7 06:16:47.130708 containerd[1734]: time="2025-07-07T06:16:47.130668697Z" level=info msg="connecting to shim c472d75189669d7b3e182c1142e5811876ddb7c6e55913ab14aa6def6d203949" address="unix:///run/containerd/s/e61e2d3883b605b773705c6316d620030a48aec3a5d46f86c8c97bcdf8f9ec9c" protocol=ttrpc version=3 Jul 7 06:16:47.148980 systemd[1]: Started cri-containerd-c472d75189669d7b3e182c1142e5811876ddb7c6e55913ab14aa6def6d203949.scope - libcontainer container c472d75189669d7b3e182c1142e5811876ddb7c6e55913ab14aa6def6d203949. 
Jul 7 06:16:47.195988 containerd[1734]: time="2025-07-07T06:16:47.195919960Z" level=info msg="StartContainer for \"c472d75189669d7b3e182c1142e5811876ddb7c6e55913ab14aa6def6d203949\" returns successfully" Jul 7 06:16:47.633941 kubelet[3113]: E0707 06:16:47.633888 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pv69l" podUID="d40c33b8-3587-4ee3-89bf-e1ad24c997ef" Jul 7 06:16:47.743049 kubelet[3113]: I0707 06:16:47.742528 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58c6c4b556-npp2q" podStartSLOduration=1.986378709 podStartE2EDuration="3.7425082s" podCreationTimestamp="2025-07-07 06:16:44 +0000 UTC" firstStartedPulling="2025-07-07 06:16:45.306286526 +0000 UTC m=+18.762732237" lastFinishedPulling="2025-07-07 06:16:47.062416006 +0000 UTC m=+20.518861728" observedRunningTime="2025-07-07 06:16:47.742049989 +0000 UTC m=+21.198495707" watchObservedRunningTime="2025-07-07 06:16:47.7425082 +0000 UTC m=+21.198953917" Jul 7 06:16:47.764553 kubelet[3113]: E0707 06:16:47.764509 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.764553 kubelet[3113]: W0707 06:16:47.764547 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.764771 kubelet[3113]: E0707 06:16:47.764570 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.764771 kubelet[3113]: E0707 06:16:47.764687 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.764771 kubelet[3113]: W0707 06:16:47.764694 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.764771 kubelet[3113]: E0707 06:16:47.764701 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.764933 kubelet[3113]: E0707 06:16:47.764790 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.764933 kubelet[3113]: W0707 06:16:47.764795 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.764933 kubelet[3113]: E0707 06:16:47.764813 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.765030 kubelet[3113]: E0707 06:16:47.764954 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.765030 kubelet[3113]: W0707 06:16:47.764959 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.765030 kubelet[3113]: E0707 06:16:47.764966 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.765141 kubelet[3113]: E0707 06:16:47.765059 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.765141 kubelet[3113]: W0707 06:16:47.765064 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.765141 kubelet[3113]: E0707 06:16:47.765070 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.765256 kubelet[3113]: E0707 06:16:47.765152 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.765256 kubelet[3113]: W0707 06:16:47.765157 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.765256 kubelet[3113]: E0707 06:16:47.765163 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.765256 kubelet[3113]: E0707 06:16:47.765244 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.765256 kubelet[3113]: W0707 06:16:47.765248 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.765256 kubelet[3113]: E0707 06:16:47.765253 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.765579 kubelet[3113]: E0707 06:16:47.765334 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.765579 kubelet[3113]: W0707 06:16:47.765339 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.765579 kubelet[3113]: E0707 06:16:47.765345 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.765825 kubelet[3113]: E0707 06:16:47.765769 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.765825 kubelet[3113]: W0707 06:16:47.765789 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.765911 kubelet[3113]: E0707 06:16:47.765900 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.766151 kubelet[3113]: E0707 06:16:47.766137 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.766192 kubelet[3113]: W0707 06:16:47.766151 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.766192 kubelet[3113]: E0707 06:16:47.766164 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.766376 kubelet[3113]: E0707 06:16:47.766280 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.766376 kubelet[3113]: W0707 06:16:47.766286 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.766376 kubelet[3113]: E0707 06:16:47.766294 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.766594 kubelet[3113]: E0707 06:16:47.766413 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.766594 kubelet[3113]: W0707 06:16:47.766419 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.766594 kubelet[3113]: E0707 06:16:47.766426 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.766594 kubelet[3113]: E0707 06:16:47.766510 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.766594 kubelet[3113]: W0707 06:16:47.766515 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.766594 kubelet[3113]: E0707 06:16:47.766521 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.766928 kubelet[3113]: E0707 06:16:47.766606 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.766928 kubelet[3113]: W0707 06:16:47.766611 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.766928 kubelet[3113]: E0707 06:16:47.766618 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.766928 kubelet[3113]: E0707 06:16:47.766699 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.766928 kubelet[3113]: W0707 06:16:47.766704 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.766928 kubelet[3113]: E0707 06:16:47.766710 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.768012 kubelet[3113]: E0707 06:16:47.767983 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.768264 kubelet[3113]: W0707 06:16:47.768097 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.768264 kubelet[3113]: E0707 06:16:47.768126 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.768478 kubelet[3113]: E0707 06:16:47.768452 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.768478 kubelet[3113]: W0707 06:16:47.768477 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.768539 kubelet[3113]: E0707 06:16:47.768491 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.768664 kubelet[3113]: E0707 06:16:47.768633 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.768664 kubelet[3113]: W0707 06:16:47.768657 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.768725 kubelet[3113]: E0707 06:16:47.768672 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.768934 kubelet[3113]: E0707 06:16:47.768910 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.768934 kubelet[3113]: W0707 06:16:47.768932 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.769009 kubelet[3113]: E0707 06:16:47.768940 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.769051 kubelet[3113]: E0707 06:16:47.769044 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.769051 kubelet[3113]: W0707 06:16:47.769049 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.769107 kubelet[3113]: E0707 06:16:47.769056 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.769159 kubelet[3113]: E0707 06:16:47.769153 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.769159 kubelet[3113]: W0707 06:16:47.769157 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.769212 kubelet[3113]: E0707 06:16:47.769164 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.769341 kubelet[3113]: E0707 06:16:47.769318 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.769341 kubelet[3113]: W0707 06:16:47.769340 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.769387 kubelet[3113]: E0707 06:16:47.769347 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.769603 kubelet[3113]: E0707 06:16:47.769585 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.769603 kubelet[3113]: W0707 06:16:47.769600 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.769663 kubelet[3113]: E0707 06:16:47.769608 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.769739 kubelet[3113]: E0707 06:16:47.769727 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.769739 kubelet[3113]: W0707 06:16:47.769737 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.769787 kubelet[3113]: E0707 06:16:47.769744 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.769884 kubelet[3113]: E0707 06:16:47.769866 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.769884 kubelet[3113]: W0707 06:16:47.769882 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.769957 kubelet[3113]: E0707 06:16:47.769889 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.769994 kubelet[3113]: E0707 06:16:47.769986 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.770036 kubelet[3113]: W0707 06:16:47.769993 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.770036 kubelet[3113]: E0707 06:16:47.769999 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.770087 kubelet[3113]: E0707 06:16:47.770078 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.770087 kubelet[3113]: W0707 06:16:47.770083 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.770140 kubelet[3113]: E0707 06:16:47.770090 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.770232 kubelet[3113]: E0707 06:16:47.770223 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.770232 kubelet[3113]: W0707 06:16:47.770230 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.770299 kubelet[3113]: E0707 06:16:47.770237 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.770547 kubelet[3113]: E0707 06:16:47.770479 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.770547 kubelet[3113]: W0707 06:16:47.770495 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.770547 kubelet[3113]: E0707 06:16:47.770505 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.770666 kubelet[3113]: E0707 06:16:47.770639 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.770666 kubelet[3113]: W0707 06:16:47.770660 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.770713 kubelet[3113]: E0707 06:16:47.770668 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.770839 kubelet[3113]: E0707 06:16:47.770830 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.770839 kubelet[3113]: W0707 06:16:47.770838 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.770889 kubelet[3113]: E0707 06:16:47.770844 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:47.771025 kubelet[3113]: E0707 06:16:47.771015 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.771025 kubelet[3113]: W0707 06:16:47.771022 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.771077 kubelet[3113]: E0707 06:16:47.771030 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:47.771207 kubelet[3113]: E0707 06:16:47.771197 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:47.771207 kubelet[3113]: W0707 06:16:47.771204 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:47.771250 kubelet[3113]: E0707 06:16:47.771211 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.732150 kubelet[3113]: I0707 06:16:48.732123 3113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:16:48.770172 containerd[1734]: time="2025-07-07T06:16:48.770127981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:48.773060 kubelet[3113]: E0707 06:16:48.773038 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.773147 kubelet[3113]: W0707 06:16:48.773057 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.773147 kubelet[3113]: E0707 06:16:48.773088 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.773247 kubelet[3113]: E0707 06:16:48.773235 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.773247 kubelet[3113]: W0707 06:16:48.773245 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.773310 kubelet[3113]: E0707 06:16:48.773253 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.773396 kubelet[3113]: E0707 06:16:48.773376 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.773396 kubelet[3113]: W0707 06:16:48.773386 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.773481 kubelet[3113]: E0707 06:16:48.773397 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.773553 kubelet[3113]: E0707 06:16:48.773536 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.773553 kubelet[3113]: W0707 06:16:48.773552 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.773607 kubelet[3113]: E0707 06:16:48.773560 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.773693 kubelet[3113]: E0707 06:16:48.773681 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.773693 kubelet[3113]: W0707 06:16:48.773690 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.773770 kubelet[3113]: E0707 06:16:48.773698 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.773800 kubelet[3113]: E0707 06:16:48.773795 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.773849 kubelet[3113]: W0707 06:16:48.773801 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.773849 kubelet[3113]: E0707 06:16:48.773822 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.773933 kubelet[3113]: E0707 06:16:48.773921 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.773933 kubelet[3113]: W0707 06:16:48.773930 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.774007 kubelet[3113]: E0707 06:16:48.773938 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.774042 kubelet[3113]: E0707 06:16:48.774031 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.774042 kubelet[3113]: W0707 06:16:48.774035 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.774107 kubelet[3113]: E0707 06:16:48.774041 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.774148 kubelet[3113]: E0707 06:16:48.774141 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.774148 kubelet[3113]: W0707 06:16:48.774147 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.774570 kubelet[3113]: E0707 06:16:48.774154 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.774570 kubelet[3113]: E0707 06:16:48.774235 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.774570 kubelet[3113]: W0707 06:16:48.774239 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.774570 kubelet[3113]: E0707 06:16:48.774246 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.774570 kubelet[3113]: E0707 06:16:48.774360 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.774570 kubelet[3113]: W0707 06:16:48.774365 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.774570 kubelet[3113]: E0707 06:16:48.774372 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.774570 kubelet[3113]: E0707 06:16:48.774463 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.774570 kubelet[3113]: W0707 06:16:48.774468 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.774570 kubelet[3113]: E0707 06:16:48.774475 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.774955 kubelet[3113]: E0707 06:16:48.774577 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.774955 kubelet[3113]: W0707 06:16:48.774583 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.774955 kubelet[3113]: E0707 06:16:48.774590 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.774955 kubelet[3113]: E0707 06:16:48.774720 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.774955 kubelet[3113]: W0707 06:16:48.774725 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.774955 kubelet[3113]: E0707 06:16:48.774731 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.774955 kubelet[3113]: E0707 06:16:48.774833 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.774955 kubelet[3113]: W0707 06:16:48.774840 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.774955 kubelet[3113]: E0707 06:16:48.774846 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.775269 containerd[1734]: time="2025-07-07T06:16:48.774736781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 06:16:48.775320 kubelet[3113]: E0707 06:16:48.775008 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.775320 kubelet[3113]: W0707 06:16:48.775014 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.775320 kubelet[3113]: E0707 06:16:48.775021 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.775320 kubelet[3113]: E0707 06:16:48.775155 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.775320 kubelet[3113]: W0707 06:16:48.775161 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.775320 kubelet[3113]: E0707 06:16:48.775167 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.775320 kubelet[3113]: E0707 06:16:48.775283 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.775320 kubelet[3113]: W0707 06:16:48.775288 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.775320 kubelet[3113]: E0707 06:16:48.775295 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.775610 kubelet[3113]: E0707 06:16:48.775433 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.775610 kubelet[3113]: W0707 06:16:48.775439 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.775610 kubelet[3113]: E0707 06:16:48.775446 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.775610 kubelet[3113]: E0707 06:16:48.775580 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.775610 kubelet[3113]: W0707 06:16:48.775585 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.775610 kubelet[3113]: E0707 06:16:48.775591 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.775900 kubelet[3113]: E0707 06:16:48.775692 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.775900 kubelet[3113]: W0707 06:16:48.775698 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.775900 kubelet[3113]: E0707 06:16:48.775704 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.775900 kubelet[3113]: E0707 06:16:48.775846 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.775900 kubelet[3113]: W0707 06:16:48.775852 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.775900 kubelet[3113]: E0707 06:16:48.775859 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.776204 kubelet[3113]: E0707 06:16:48.776109 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.776204 kubelet[3113]: W0707 06:16:48.776169 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.776204 kubelet[3113]: E0707 06:16:48.776180 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.776347 kubelet[3113]: E0707 06:16:48.776335 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.776347 kubelet[3113]: W0707 06:16:48.776345 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.776419 kubelet[3113]: E0707 06:16:48.776353 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.776486 kubelet[3113]: E0707 06:16:48.776472 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.776486 kubelet[3113]: W0707 06:16:48.776482 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.776566 kubelet[3113]: E0707 06:16:48.776490 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.776644 kubelet[3113]: E0707 06:16:48.776634 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.776644 kubelet[3113]: W0707 06:16:48.776642 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.776716 kubelet[3113]: E0707 06:16:48.776649 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.776783 kubelet[3113]: E0707 06:16:48.776771 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.776783 kubelet[3113]: W0707 06:16:48.776779 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.776892 kubelet[3113]: E0707 06:16:48.776786 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.776962 kubelet[3113]: E0707 06:16:48.776950 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.776962 kubelet[3113]: W0707 06:16:48.776960 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.777052 kubelet[3113]: E0707 06:16:48.776968 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.777335 kubelet[3113]: E0707 06:16:48.777252 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.777335 kubelet[3113]: W0707 06:16:48.777270 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.777335 kubelet[3113]: E0707 06:16:48.777282 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.777443 kubelet[3113]: E0707 06:16:48.777423 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.777443 kubelet[3113]: W0707 06:16:48.777429 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.777443 kubelet[3113]: E0707 06:16:48.777437 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.777710 kubelet[3113]: E0707 06:16:48.777573 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.777710 kubelet[3113]: W0707 06:16:48.777580 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.777710 kubelet[3113]: E0707 06:16:48.777587 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.777710 kubelet[3113]: E0707 06:16:48.777698 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.777710 kubelet[3113]: W0707 06:16:48.777704 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.778013 kubelet[3113]: E0707 06:16:48.777711 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:48.778050 containerd[1734]: time="2025-07-07T06:16:48.777904508Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:48.778106 kubelet[3113]: E0707 06:16:48.778047 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:48.778106 kubelet[3113]: W0707 06:16:48.778053 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:48.778106 kubelet[3113]: E0707 06:16:48.778062 3113 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:48.782035 containerd[1734]: time="2025-07-07T06:16:48.781989084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:48.782675 containerd[1734]: time="2025-07-07T06:16:48.782394333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.719511562s" Jul 7 06:16:48.782675 containerd[1734]: time="2025-07-07T06:16:48.782423781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference 
\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 06:16:48.792467 containerd[1734]: time="2025-07-07T06:16:48.792442887Z" level=info msg="CreateContainer within sandbox \"4d711be28e44fde4d63c2749a3583bb2b0f156c6a4d84d18ba93f30efc7bd6d1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 06:16:48.813835 containerd[1734]: time="2025-07-07T06:16:48.811842973Z" level=info msg="Container 6a298c566194ed339ac3a61a92866bcaa5439810a9bc2c49236aace88dddbdb2: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:48.833986 containerd[1734]: time="2025-07-07T06:16:48.833960608Z" level=info msg="CreateContainer within sandbox \"4d711be28e44fde4d63c2749a3583bb2b0f156c6a4d84d18ba93f30efc7bd6d1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6a298c566194ed339ac3a61a92866bcaa5439810a9bc2c49236aace88dddbdb2\"" Jul 7 06:16:48.834554 containerd[1734]: time="2025-07-07T06:16:48.834346921Z" level=info msg="StartContainer for \"6a298c566194ed339ac3a61a92866bcaa5439810a9bc2c49236aace88dddbdb2\"" Jul 7 06:16:48.835814 containerd[1734]: time="2025-07-07T06:16:48.835766266Z" level=info msg="connecting to shim 6a298c566194ed339ac3a61a92866bcaa5439810a9bc2c49236aace88dddbdb2" address="unix:///run/containerd/s/f62fb51d02b73ee060ac512813b643ca8771eec41a7c3287a8eabb1c07d5f052" protocol=ttrpc version=3 Jul 7 06:16:48.855015 systemd[1]: Started cri-containerd-6a298c566194ed339ac3a61a92866bcaa5439810a9bc2c49236aace88dddbdb2.scope - libcontainer container 6a298c566194ed339ac3a61a92866bcaa5439810a9bc2c49236aace88dddbdb2. Jul 7 06:16:48.890106 containerd[1734]: time="2025-07-07T06:16:48.890036199Z" level=info msg="StartContainer for \"6a298c566194ed339ac3a61a92866bcaa5439810a9bc2c49236aace88dddbdb2\" returns successfully" Jul 7 06:16:48.893248 systemd[1]: cri-containerd-6a298c566194ed339ac3a61a92866bcaa5439810a9bc2c49236aace88dddbdb2.scope: Deactivated successfully. 
Jul 7 06:16:48.896519 containerd[1734]: time="2025-07-07T06:16:48.896466378Z" level=info msg="received exit event container_id:\"6a298c566194ed339ac3a61a92866bcaa5439810a9bc2c49236aace88dddbdb2\" id:\"6a298c566194ed339ac3a61a92866bcaa5439810a9bc2c49236aace88dddbdb2\" pid:3870 exited_at:{seconds:1751869008 nanos:896114024}" Jul 7 06:16:48.896519 containerd[1734]: time="2025-07-07T06:16:48.896491865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a298c566194ed339ac3a61a92866bcaa5439810a9bc2c49236aace88dddbdb2\" id:\"6a298c566194ed339ac3a61a92866bcaa5439810a9bc2c49236aace88dddbdb2\" pid:3870 exited_at:{seconds:1751869008 nanos:896114024}" Jul 7 06:16:48.912898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a298c566194ed339ac3a61a92866bcaa5439810a9bc2c49236aace88dddbdb2-rootfs.mount: Deactivated successfully. Jul 7 06:16:49.633272 kubelet[3113]: E0707 06:16:49.633212 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pv69l" podUID="d40c33b8-3587-4ee3-89bf-e1ad24c997ef" Jul 7 06:16:51.633362 kubelet[3113]: E0707 06:16:51.633308 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pv69l" podUID="d40c33b8-3587-4ee3-89bf-e1ad24c997ef" Jul 7 06:16:51.742400 containerd[1734]: time="2025-07-07T06:16:51.742357721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 06:16:53.633840 kubelet[3113]: E0707 06:16:53.633727 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pv69l" podUID="d40c33b8-3587-4ee3-89bf-e1ad24c997ef" Jul 7 06:16:54.143840 containerd[1734]: time="2025-07-07T06:16:54.143780425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:54.150756 containerd[1734]: time="2025-07-07T06:16:54.150718675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 06:16:54.155599 containerd[1734]: time="2025-07-07T06:16:54.155553038Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:54.161437 containerd[1734]: time="2025-07-07T06:16:54.161377828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:54.161924 containerd[1734]: time="2025-07-07T06:16:54.161797994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.419392075s" Jul 7 06:16:54.161924 containerd[1734]: time="2025-07-07T06:16:54.161853581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 06:16:54.170663 containerd[1734]: time="2025-07-07T06:16:54.170630702Z" level=info msg="CreateContainer within sandbox \"4d711be28e44fde4d63c2749a3583bb2b0f156c6a4d84d18ba93f30efc7bd6d1\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 06:16:54.194394 containerd[1734]: time="2025-07-07T06:16:54.194363432Z" level=info msg="Container db78a4e26a5a5e6f38da9edbe480bfbcce2a06dbc312c32b5789d1f51767036a: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:54.200404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249081056.mount: Deactivated successfully. Jul 7 06:16:54.225565 containerd[1734]: time="2025-07-07T06:16:54.225538192Z" level=info msg="CreateContainer within sandbox \"4d711be28e44fde4d63c2749a3583bb2b0f156c6a4d84d18ba93f30efc7bd6d1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"db78a4e26a5a5e6f38da9edbe480bfbcce2a06dbc312c32b5789d1f51767036a\"" Jul 7 06:16:54.225967 containerd[1734]: time="2025-07-07T06:16:54.225874964Z" level=info msg="StartContainer for \"db78a4e26a5a5e6f38da9edbe480bfbcce2a06dbc312c32b5789d1f51767036a\"" Jul 7 06:16:54.227592 containerd[1734]: time="2025-07-07T06:16:54.227553671Z" level=info msg="connecting to shim db78a4e26a5a5e6f38da9edbe480bfbcce2a06dbc312c32b5789d1f51767036a" address="unix:///run/containerd/s/f62fb51d02b73ee060ac512813b643ca8771eec41a7c3287a8eabb1c07d5f052" protocol=ttrpc version=3 Jul 7 06:16:54.246980 systemd[1]: Started cri-containerd-db78a4e26a5a5e6f38da9edbe480bfbcce2a06dbc312c32b5789d1f51767036a.scope - libcontainer container db78a4e26a5a5e6f38da9edbe480bfbcce2a06dbc312c32b5789d1f51767036a. 
Jul 7 06:16:54.282065 containerd[1734]: time="2025-07-07T06:16:54.282024877Z" level=info msg="StartContainer for \"db78a4e26a5a5e6f38da9edbe480bfbcce2a06dbc312c32b5789d1f51767036a\" returns successfully" Jul 7 06:16:55.481532 containerd[1734]: time="2025-07-07T06:16:55.481467399Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:16:55.483286 systemd[1]: cri-containerd-db78a4e26a5a5e6f38da9edbe480bfbcce2a06dbc312c32b5789d1f51767036a.scope: Deactivated successfully. Jul 7 06:16:55.483625 systemd[1]: cri-containerd-db78a4e26a5a5e6f38da9edbe480bfbcce2a06dbc312c32b5789d1f51767036a.scope: Consumed 444ms CPU time, 190.7M memory peak, 171.2M written to disk. Jul 7 06:16:55.486303 containerd[1734]: time="2025-07-07T06:16:55.486205020Z" level=info msg="received exit event container_id:\"db78a4e26a5a5e6f38da9edbe480bfbcce2a06dbc312c32b5789d1f51767036a\" id:\"db78a4e26a5a5e6f38da9edbe480bfbcce2a06dbc312c32b5789d1f51767036a\" pid:3930 exited_at:{seconds:1751869015 nanos:485888113}" Jul 7 06:16:55.486303 containerd[1734]: time="2025-07-07T06:16:55.486266398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db78a4e26a5a5e6f38da9edbe480bfbcce2a06dbc312c32b5789d1f51767036a\" id:\"db78a4e26a5a5e6f38da9edbe480bfbcce2a06dbc312c32b5789d1f51767036a\" pid:3930 exited_at:{seconds:1751869015 nanos:485888113}" Jul 7 06:16:55.506864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db78a4e26a5a5e6f38da9edbe480bfbcce2a06dbc312c32b5789d1f51767036a-rootfs.mount: Deactivated successfully. 
Jul 7 06:16:55.523185 kubelet[3113]: I0707 06:16:55.523132 3113 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 06:16:55.768592 systemd[1]: Created slice kubepods-burstable-pod22773889_0774_4541_902a_94cea1187ad6.slice - libcontainer container kubepods-burstable-pod22773889_0774_4541_902a_94cea1187ad6.slice. Jul 7 06:16:55.922364 kubelet[3113]: I0707 06:16:55.922309 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22773889-0774-4541-902a-94cea1187ad6-config-volume\") pod \"coredns-674b8bbfcf-hf657\" (UID: \"22773889-0774-4541-902a-94cea1187ad6\") " pod="kube-system/coredns-674b8bbfcf-hf657" Jul 7 06:16:55.922364 kubelet[3113]: I0707 06:16:55.922363 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzvbx\" (UniqueName: \"kubernetes.io/projected/22773889-0774-4541-902a-94cea1187ad6-kube-api-access-mzvbx\") pod \"coredns-674b8bbfcf-hf657\" (UID: \"22773889-0774-4541-902a-94cea1187ad6\") " pod="kube-system/coredns-674b8bbfcf-hf657" Jul 7 06:16:56.064353 systemd[1]: Created slice kubepods-burstable-pod3b1aa788_9a19_4746_9123_ae8c0cbae91b.slice - libcontainer container kubepods-burstable-pod3b1aa788_9a19_4746_9123_ae8c0cbae91b.slice. 
Jul 7 06:16:56.075380 containerd[1734]: time="2025-07-07T06:16:56.075337044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hf657,Uid:22773889-0774-4541-902a-94cea1187ad6,Namespace:kube-system,Attempt:0,}" Jul 7 06:16:56.123106 kubelet[3113]: I0707 06:16:56.123051 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b1aa788-9a19-4746-9123-ae8c0cbae91b-config-volume\") pod \"coredns-674b8bbfcf-nw88c\" (UID: \"3b1aa788-9a19-4746-9123-ae8c0cbae91b\") " pod="kube-system/coredns-674b8bbfcf-nw88c" Jul 7 06:16:56.123221 kubelet[3113]: I0707 06:16:56.123109 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7f5c\" (UniqueName: \"kubernetes.io/projected/3b1aa788-9a19-4746-9123-ae8c0cbae91b-kube-api-access-z7f5c\") pod \"coredns-674b8bbfcf-nw88c\" (UID: \"3b1aa788-9a19-4746-9123-ae8c0cbae91b\") " pod="kube-system/coredns-674b8bbfcf-nw88c" Jul 7 06:16:56.308134 systemd[1]: Created slice kubepods-besteffort-pod77f216df_f1a6_4265_a0aa_6eed0541752d.slice - libcontainer container kubepods-besteffort-pod77f216df_f1a6_4265_a0aa_6eed0541752d.slice. Jul 7 06:16:56.314372 systemd[1]: Created slice kubepods-besteffort-podd40c33b8_3587_4ee3_89bf_e1ad24c997ef.slice - libcontainer container kubepods-besteffort-podd40c33b8_3587_4ee3_89bf_e1ad24c997ef.slice. 
Jul 7 06:16:56.316872 containerd[1734]: time="2025-07-07T06:16:56.316649086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pv69l,Uid:d40c33b8-3587-4ee3-89bf-e1ad24c997ef,Namespace:calico-system,Attempt:0,}" Jul 7 06:16:56.325075 kubelet[3113]: I0707 06:16:56.325039 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9fcz\" (UniqueName: \"kubernetes.io/projected/77f216df-f1a6-4265-a0aa-6eed0541752d-kube-api-access-j9fcz\") pod \"whisker-777c78c678-lb6k9\" (UID: \"77f216df-f1a6-4265-a0aa-6eed0541752d\") " pod="calico-system/whisker-777c78c678-lb6k9" Jul 7 06:16:56.325174 kubelet[3113]: I0707 06:16:56.325100 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77f216df-f1a6-4265-a0aa-6eed0541752d-whisker-ca-bundle\") pod \"whisker-777c78c678-lb6k9\" (UID: \"77f216df-f1a6-4265-a0aa-6eed0541752d\") " pod="calico-system/whisker-777c78c678-lb6k9" Jul 7 06:16:56.325174 kubelet[3113]: I0707 06:16:56.325132 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/77f216df-f1a6-4265-a0aa-6eed0541752d-whisker-backend-key-pair\") pod \"whisker-777c78c678-lb6k9\" (UID: \"77f216df-f1a6-4265-a0aa-6eed0541752d\") " pod="calico-system/whisker-777c78c678-lb6k9" Jul 7 06:16:56.370066 containerd[1734]: time="2025-07-07T06:16:56.369824788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nw88c,Uid:3b1aa788-9a19-4746-9123-ae8c0cbae91b,Namespace:kube-system,Attempt:0,}" Jul 7 06:16:56.421879 systemd[1]: Created slice kubepods-besteffort-pod77570737_2af5_46d1_a661_cdebb1903a96.slice - libcontainer container kubepods-besteffort-pod77570737_2af5_46d1_a661_cdebb1903a96.slice. 
Jul 7 06:16:56.425628 kubelet[3113]: I0707 06:16:56.425317 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/388c6467-152d-4b03-8422-51ecbced9736-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-xjf89\" (UID: \"388c6467-152d-4b03-8422-51ecbced9736\") " pod="calico-system/goldmane-768f4c5c69-xjf89" Jul 7 06:16:56.428878 systemd[1]: Created slice kubepods-besteffort-pod388c6467_152d_4b03_8422_51ecbced9736.slice - libcontainer container kubepods-besteffort-pod388c6467_152d_4b03_8422_51ecbced9736.slice. Jul 7 06:16:56.432414 kubelet[3113]: I0707 06:16:56.432146 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77570737-2af5-46d1-a661-cdebb1903a96-tigera-ca-bundle\") pod \"calico-kube-controllers-565cbf9989-z97xc\" (UID: \"77570737-2af5-46d1-a661-cdebb1903a96\") " pod="calico-system/calico-kube-controllers-565cbf9989-z97xc" Jul 7 06:16:56.432605 kubelet[3113]: I0707 06:16:56.432560 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gls2b\" (UniqueName: \"kubernetes.io/projected/77570737-2af5-46d1-a661-cdebb1903a96-kube-api-access-gls2b\") pod \"calico-kube-controllers-565cbf9989-z97xc\" (UID: \"77570737-2af5-46d1-a661-cdebb1903a96\") " pod="calico-system/calico-kube-controllers-565cbf9989-z97xc" Jul 7 06:16:56.434794 kubelet[3113]: I0707 06:16:56.434759 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/388c6467-152d-4b03-8422-51ecbced9736-config\") pod \"goldmane-768f4c5c69-xjf89\" (UID: \"388c6467-152d-4b03-8422-51ecbced9736\") " pod="calico-system/goldmane-768f4c5c69-xjf89" Jul 7 06:16:56.437497 kubelet[3113]: I0707 06:16:56.437383 3113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp8wv\" (UniqueName: \"kubernetes.io/projected/388c6467-152d-4b03-8422-51ecbced9736-kube-api-access-dp8wv\") pod \"goldmane-768f4c5c69-xjf89\" (UID: \"388c6467-152d-4b03-8422-51ecbced9736\") " pod="calico-system/goldmane-768f4c5c69-xjf89" Jul 7 06:16:56.441357 kubelet[3113]: I0707 06:16:56.439903 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/388c6467-152d-4b03-8422-51ecbced9736-goldmane-key-pair\") pod \"goldmane-768f4c5c69-xjf89\" (UID: \"388c6467-152d-4b03-8422-51ecbced9736\") " pod="calico-system/goldmane-768f4c5c69-xjf89" Jul 7 06:16:56.447638 systemd[1]: Created slice kubepods-besteffort-poda747d663_7dc8_4f34_ad5a_4c321e7b0a28.slice - libcontainer container kubepods-besteffort-poda747d663_7dc8_4f34_ad5a_4c321e7b0a28.slice. Jul 7 06:16:56.462116 systemd[1]: Created slice kubepods-besteffort-pod4fea7350_6dea_4f40_b757_4ac4780b3628.slice - libcontainer container kubepods-besteffort-pod4fea7350_6dea_4f40_b757_4ac4780b3628.slice. 
Jul 7 06:16:56.485092 containerd[1734]: time="2025-07-07T06:16:56.485058074Z" level=error msg="Failed to destroy network for sandbox \"6f843aa131fb098603eaac2072a52fd2e7b718fadcddc05a5aeda7432088c03d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.492576 containerd[1734]: time="2025-07-07T06:16:56.489957908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hf657,Uid:22773889-0774-4541-902a-94cea1187ad6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f843aa131fb098603eaac2072a52fd2e7b718fadcddc05a5aeda7432088c03d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.492722 kubelet[3113]: E0707 06:16:56.490335 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f843aa131fb098603eaac2072a52fd2e7b718fadcddc05a5aeda7432088c03d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.492722 kubelet[3113]: E0707 06:16:56.490703 3113 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f843aa131fb098603eaac2072a52fd2e7b718fadcddc05a5aeda7432088c03d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hf657" Jul 7 06:16:56.492722 kubelet[3113]: E0707 06:16:56.490732 3113 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f843aa131fb098603eaac2072a52fd2e7b718fadcddc05a5aeda7432088c03d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hf657" Jul 7 06:16:56.492848 kubelet[3113]: E0707 06:16:56.490879 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hf657_kube-system(22773889-0774-4541-902a-94cea1187ad6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hf657_kube-system(22773889-0774-4541-902a-94cea1187ad6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f843aa131fb098603eaac2072a52fd2e7b718fadcddc05a5aeda7432088c03d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hf657" podUID="22773889-0774-4541-902a-94cea1187ad6" Jul 7 06:16:56.511032 systemd[1]: run-netns-cni\x2d0d54403b\x2d2a46\x2d010d\x2de089\x2d2b9dfc1ebd45.mount: Deactivated successfully. Jul 7 06:16:56.536145 containerd[1734]: time="2025-07-07T06:16:56.536112817Z" level=error msg="Failed to destroy network for sandbox \"9525b8318f7097c4971c04b07dab5d39dcad95739f64d3d80ada09526ae073b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.537779 systemd[1]: run-netns-cni\x2d1d158c4f\x2d407a\x2d59dd\x2ded4d\x2d6fe4507fc71b.mount: Deactivated successfully. 
Jul 7 06:16:56.540397 kubelet[3113]: I0707 06:16:56.540366 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a747d663-7dc8-4f34-ad5a-4c321e7b0a28-calico-apiserver-certs\") pod \"calico-apiserver-6676f4b4fb-62vxk\" (UID: \"a747d663-7dc8-4f34-ad5a-4c321e7b0a28\") " pod="calico-apiserver/calico-apiserver-6676f4b4fb-62vxk" Jul 7 06:16:56.540769 kubelet[3113]: I0707 06:16:56.540751 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr2tj\" (UniqueName: \"kubernetes.io/projected/4fea7350-6dea-4f40-b757-4ac4780b3628-kube-api-access-hr2tj\") pod \"calico-apiserver-6676f4b4fb-w9c8j\" (UID: \"4fea7350-6dea-4f40-b757-4ac4780b3628\") " pod="calico-apiserver/calico-apiserver-6676f4b4fb-w9c8j" Jul 7 06:16:56.540968 kubelet[3113]: I0707 06:16:56.540955 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvgfr\" (UniqueName: \"kubernetes.io/projected/a747d663-7dc8-4f34-ad5a-4c321e7b0a28-kube-api-access-cvgfr\") pod \"calico-apiserver-6676f4b4fb-62vxk\" (UID: \"a747d663-7dc8-4f34-ad5a-4c321e7b0a28\") " pod="calico-apiserver/calico-apiserver-6676f4b4fb-62vxk" Jul 7 06:16:56.541162 kubelet[3113]: I0707 06:16:56.541144 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4fea7350-6dea-4f40-b757-4ac4780b3628-calico-apiserver-certs\") pod \"calico-apiserver-6676f4b4fb-w9c8j\" (UID: \"4fea7350-6dea-4f40-b757-4ac4780b3628\") " pod="calico-apiserver/calico-apiserver-6676f4b4fb-w9c8j" Jul 7 06:16:56.542234 containerd[1734]: time="2025-07-07T06:16:56.541604321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pv69l,Uid:d40c33b8-3587-4ee3-89bf-e1ad24c997ef,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9525b8318f7097c4971c04b07dab5d39dcad95739f64d3d80ada09526ae073b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.543821 kubelet[3113]: E0707 06:16:56.543099 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9525b8318f7097c4971c04b07dab5d39dcad95739f64d3d80ada09526ae073b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.543821 kubelet[3113]: E0707 06:16:56.543231 3113 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9525b8318f7097c4971c04b07dab5d39dcad95739f64d3d80ada09526ae073b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pv69l" Jul 7 06:16:56.543821 kubelet[3113]: E0707 06:16:56.543568 3113 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9525b8318f7097c4971c04b07dab5d39dcad95739f64d3d80ada09526ae073b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pv69l" Jul 7 06:16:56.544084 kubelet[3113]: E0707 06:16:56.544051 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pv69l_calico-system(d40c33b8-3587-4ee3-89bf-e1ad24c997ef)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pv69l_calico-system(d40c33b8-3587-4ee3-89bf-e1ad24c997ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9525b8318f7097c4971c04b07dab5d39dcad95739f64d3d80ada09526ae073b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pv69l" podUID="d40c33b8-3587-4ee3-89bf-e1ad24c997ef" Jul 7 06:16:56.549720 containerd[1734]: time="2025-07-07T06:16:56.549669983Z" level=error msg="Failed to destroy network for sandbox \"973ac631dc02983934cd59e32ce1d153e9b220506dd45863167a74d72663eefd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.554985 containerd[1734]: time="2025-07-07T06:16:56.554886852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nw88c,Uid:3b1aa788-9a19-4746-9123-ae8c0cbae91b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"973ac631dc02983934cd59e32ce1d153e9b220506dd45863167a74d72663eefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.555258 kubelet[3113]: E0707 06:16:56.555124 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"973ac631dc02983934cd59e32ce1d153e9b220506dd45863167a74d72663eefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.555258 kubelet[3113]: 
E0707 06:16:56.555161 3113 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"973ac631dc02983934cd59e32ce1d153e9b220506dd45863167a74d72663eefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nw88c" Jul 7 06:16:56.555258 kubelet[3113]: E0707 06:16:56.555183 3113 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"973ac631dc02983934cd59e32ce1d153e9b220506dd45863167a74d72663eefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nw88c" Jul 7 06:16:56.555364 kubelet[3113]: E0707 06:16:56.555245 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nw88c_kube-system(3b1aa788-9a19-4746-9123-ae8c0cbae91b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nw88c_kube-system(3b1aa788-9a19-4746-9123-ae8c0cbae91b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"973ac631dc02983934cd59e32ce1d153e9b220506dd45863167a74d72663eefd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nw88c" podUID="3b1aa788-9a19-4746-9123-ae8c0cbae91b" Jul 7 06:16:56.555682 systemd[1]: run-netns-cni\x2d7ce3614f\x2d9533\x2d854f\x2d3538\x2da8de06827da7.mount: Deactivated successfully. 
Jul 7 06:16:56.612620 containerd[1734]: time="2025-07-07T06:16:56.612598012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-777c78c678-lb6k9,Uid:77f216df-f1a6-4265-a0aa-6eed0541752d,Namespace:calico-system,Attempt:0,}" Jul 7 06:16:56.659386 containerd[1734]: time="2025-07-07T06:16:56.659297437Z" level=error msg="Failed to destroy network for sandbox \"4c89e71321e8d5e4e617ed0ec422771e2db2a36cd962df66264752ba896ab0a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.662279 containerd[1734]: time="2025-07-07T06:16:56.662243594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-777c78c678-lb6k9,Uid:77f216df-f1a6-4265-a0aa-6eed0541752d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c89e71321e8d5e4e617ed0ec422771e2db2a36cd962df66264752ba896ab0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.662444 kubelet[3113]: E0707 06:16:56.662408 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c89e71321e8d5e4e617ed0ec422771e2db2a36cd962df66264752ba896ab0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.663302 kubelet[3113]: E0707 06:16:56.662456 3113 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c89e71321e8d5e4e617ed0ec422771e2db2a36cd962df66264752ba896ab0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-777c78c678-lb6k9" Jul 7 06:16:56.663302 kubelet[3113]: E0707 06:16:56.663253 3113 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c89e71321e8d5e4e617ed0ec422771e2db2a36cd962df66264752ba896ab0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-777c78c678-lb6k9" Jul 7 06:16:56.663409 kubelet[3113]: E0707 06:16:56.663312 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-777c78c678-lb6k9_calico-system(77f216df-f1a6-4265-a0aa-6eed0541752d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-777c78c678-lb6k9_calico-system(77f216df-f1a6-4265-a0aa-6eed0541752d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c89e71321e8d5e4e617ed0ec422771e2db2a36cd962df66264752ba896ab0a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-777c78c678-lb6k9" podUID="77f216df-f1a6-4265-a0aa-6eed0541752d" Jul 7 06:16:56.741138 containerd[1734]: time="2025-07-07T06:16:56.741115822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565cbf9989-z97xc,Uid:77570737-2af5-46d1-a661-cdebb1903a96,Namespace:calico-system,Attempt:0,}" Jul 7 06:16:56.756149 containerd[1734]: time="2025-07-07T06:16:56.755964196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xjf89,Uid:388c6467-152d-4b03-8422-51ecbced9736,Namespace:calico-system,Attempt:0,}" Jul 7 06:16:56.757840 containerd[1734]: 
time="2025-07-07T06:16:56.757799860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6676f4b4fb-62vxk,Uid:a747d663-7dc8-4f34-ad5a-4c321e7b0a28,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:16:56.767113 containerd[1734]: time="2025-07-07T06:16:56.767086578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 06:16:56.772261 containerd[1734]: time="2025-07-07T06:16:56.772236039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6676f4b4fb-w9c8j,Uid:4fea7350-6dea-4f40-b757-4ac4780b3628,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:16:56.840370 containerd[1734]: time="2025-07-07T06:16:56.840341224Z" level=error msg="Failed to destroy network for sandbox \"06eb611bc042dc962f1b3b6306abdd771a808936bdda8e9d8410fc9807a5e738\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.849527 containerd[1734]: time="2025-07-07T06:16:56.849484193Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565cbf9989-z97xc,Uid:77570737-2af5-46d1-a661-cdebb1903a96,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06eb611bc042dc962f1b3b6306abdd771a808936bdda8e9d8410fc9807a5e738\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.849840 kubelet[3113]: E0707 06:16:56.849771 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06eb611bc042dc962f1b3b6306abdd771a808936bdda8e9d8410fc9807a5e738\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jul 7 06:16:56.849907 kubelet[3113]: E0707 06:16:56.849837 3113 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06eb611bc042dc962f1b3b6306abdd771a808936bdda8e9d8410fc9807a5e738\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-565cbf9989-z97xc" Jul 7 06:16:56.849907 kubelet[3113]: E0707 06:16:56.849858 3113 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06eb611bc042dc962f1b3b6306abdd771a808936bdda8e9d8410fc9807a5e738\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-565cbf9989-z97xc" Jul 7 06:16:56.849965 kubelet[3113]: E0707 06:16:56.849904 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-565cbf9989-z97xc_calico-system(77570737-2af5-46d1-a661-cdebb1903a96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-565cbf9989-z97xc_calico-system(77570737-2af5-46d1-a661-cdebb1903a96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06eb611bc042dc962f1b3b6306abdd771a808936bdda8e9d8410fc9807a5e738\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-565cbf9989-z97xc" podUID="77570737-2af5-46d1-a661-cdebb1903a96" Jul 7 06:16:56.866277 containerd[1734]: time="2025-07-07T06:16:56.866106945Z" level=error 
msg="Failed to destroy network for sandbox \"43d9360c6f8a722b1ab946c3da3c2902eec6a8e2505408861987ecb65531ec1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.871680 containerd[1734]: time="2025-07-07T06:16:56.871648716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xjf89,Uid:388c6467-152d-4b03-8422-51ecbced9736,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43d9360c6f8a722b1ab946c3da3c2902eec6a8e2505408861987ecb65531ec1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.872506 kubelet[3113]: E0707 06:16:56.872405 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43d9360c6f8a722b1ab946c3da3c2902eec6a8e2505408861987ecb65531ec1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.872506 kubelet[3113]: E0707 06:16:56.872452 3113 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43d9360c6f8a722b1ab946c3da3c2902eec6a8e2505408861987ecb65531ec1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-xjf89" Jul 7 06:16:56.872506 kubelet[3113]: E0707 06:16:56.872473 3113 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"43d9360c6f8a722b1ab946c3da3c2902eec6a8e2505408861987ecb65531ec1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-xjf89" Jul 7 06:16:56.872647 kubelet[3113]: E0707 06:16:56.872516 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-xjf89_calico-system(388c6467-152d-4b03-8422-51ecbced9736)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-xjf89_calico-system(388c6467-152d-4b03-8422-51ecbced9736)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43d9360c6f8a722b1ab946c3da3c2902eec6a8e2505408861987ecb65531ec1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-xjf89" podUID="388c6467-152d-4b03-8422-51ecbced9736" Jul 7 06:16:56.893344 containerd[1734]: time="2025-07-07T06:16:56.893300255Z" level=error msg="Failed to destroy network for sandbox \"dc8cbad7b95bd82f02ecc38ca2d67f31a790edebe5d761bdfe5d4d088e3dc91c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.895473 containerd[1734]: time="2025-07-07T06:16:56.895437827Z" level=error msg="Failed to destroy network for sandbox \"1af914caaceef04c55114301a660742ee4b981dc186dfb1eddb9cf86be769dc6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.897047 containerd[1734]: time="2025-07-07T06:16:56.897015178Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6676f4b4fb-w9c8j,Uid:4fea7350-6dea-4f40-b757-4ac4780b3628,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc8cbad7b95bd82f02ecc38ca2d67f31a790edebe5d761bdfe5d4d088e3dc91c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.897289 kubelet[3113]: E0707 06:16:56.897252 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc8cbad7b95bd82f02ecc38ca2d67f31a790edebe5d761bdfe5d4d088e3dc91c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.897347 kubelet[3113]: E0707 06:16:56.897295 3113 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc8cbad7b95bd82f02ecc38ca2d67f31a790edebe5d761bdfe5d4d088e3dc91c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6676f4b4fb-w9c8j" Jul 7 06:16:56.897347 kubelet[3113]: E0707 06:16:56.897318 3113 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc8cbad7b95bd82f02ecc38ca2d67f31a790edebe5d761bdfe5d4d088e3dc91c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6676f4b4fb-w9c8j" Jul 7 06:16:56.897533 kubelet[3113]: E0707 
06:16:56.897368 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6676f4b4fb-w9c8j_calico-apiserver(4fea7350-6dea-4f40-b757-4ac4780b3628)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6676f4b4fb-w9c8j_calico-apiserver(4fea7350-6dea-4f40-b757-4ac4780b3628)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc8cbad7b95bd82f02ecc38ca2d67f31a790edebe5d761bdfe5d4d088e3dc91c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6676f4b4fb-w9c8j" podUID="4fea7350-6dea-4f40-b757-4ac4780b3628" Jul 7 06:16:56.902003 containerd[1734]: time="2025-07-07T06:16:56.901967276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6676f4b4fb-62vxk,Uid:a747d663-7dc8-4f34-ad5a-4c321e7b0a28,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1af914caaceef04c55114301a660742ee4b981dc186dfb1eddb9cf86be769dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.902147 kubelet[3113]: E0707 06:16:56.902099 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1af914caaceef04c55114301a660742ee4b981dc186dfb1eddb9cf86be769dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:56.902147 kubelet[3113]: E0707 06:16:56.902134 3113 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"1af914caaceef04c55114301a660742ee4b981dc186dfb1eddb9cf86be769dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6676f4b4fb-62vxk" Jul 7 06:16:56.902232 kubelet[3113]: E0707 06:16:56.902157 3113 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1af914caaceef04c55114301a660742ee4b981dc186dfb1eddb9cf86be769dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6676f4b4fb-62vxk" Jul 7 06:16:56.902232 kubelet[3113]: E0707 06:16:56.902200 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6676f4b4fb-62vxk_calico-apiserver(a747d663-7dc8-4f34-ad5a-4c321e7b0a28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6676f4b4fb-62vxk_calico-apiserver(a747d663-7dc8-4f34-ad5a-4c321e7b0a28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1af914caaceef04c55114301a660742ee4b981dc186dfb1eddb9cf86be769dc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6676f4b4fb-62vxk" podUID="a747d663-7dc8-4f34-ad5a-4c321e7b0a28" Jul 7 06:16:59.700468 kubelet[3113]: I0707 06:16:59.700427 3113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:17:00.921945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount373630550.mount: Deactivated successfully. 
Jul 7 06:17:00.969793 containerd[1734]: time="2025-07-07T06:17:00.969737967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:00.972384 containerd[1734]: time="2025-07-07T06:17:00.972346178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 06:17:00.978296 containerd[1734]: time="2025-07-07T06:17:00.978250733Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:00.982185 containerd[1734]: time="2025-07-07T06:17:00.982138517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:00.982685 containerd[1734]: time="2025-07-07T06:17:00.982419402Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 4.215295415s" Jul 7 06:17:00.982685 containerd[1734]: time="2025-07-07T06:17:00.982449303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 06:17:01.001976 containerd[1734]: time="2025-07-07T06:17:01.001940558Z" level=info msg="CreateContainer within sandbox \"4d711be28e44fde4d63c2749a3583bb2b0f156c6a4d84d18ba93f30efc7bd6d1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:17:01.026138 containerd[1734]: time="2025-07-07T06:17:01.025663828Z" level=info msg="Container 
734a0c07f1d95e65f885909861cb3671b22c34c1b100a394cdae81d8bb0d079c: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:01.027467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2257886310.mount: Deactivated successfully. Jul 7 06:17:01.050154 containerd[1734]: time="2025-07-07T06:17:01.050124567Z" level=info msg="CreateContainer within sandbox \"4d711be28e44fde4d63c2749a3583bb2b0f156c6a4d84d18ba93f30efc7bd6d1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"734a0c07f1d95e65f885909861cb3671b22c34c1b100a394cdae81d8bb0d079c\"" Jul 7 06:17:01.050829 containerd[1734]: time="2025-07-07T06:17:01.050584011Z" level=info msg="StartContainer for \"734a0c07f1d95e65f885909861cb3671b22c34c1b100a394cdae81d8bb0d079c\"" Jul 7 06:17:01.052249 containerd[1734]: time="2025-07-07T06:17:01.052209902Z" level=info msg="connecting to shim 734a0c07f1d95e65f885909861cb3671b22c34c1b100a394cdae81d8bb0d079c" address="unix:///run/containerd/s/f62fb51d02b73ee060ac512813b643ca8771eec41a7c3287a8eabb1c07d5f052" protocol=ttrpc version=3 Jul 7 06:17:01.078942 systemd[1]: Started cri-containerd-734a0c07f1d95e65f885909861cb3671b22c34c1b100a394cdae81d8bb0d079c.scope - libcontainer container 734a0c07f1d95e65f885909861cb3671b22c34c1b100a394cdae81d8bb0d079c. Jul 7 06:17:01.114505 containerd[1734]: time="2025-07-07T06:17:01.114468462Z" level=info msg="StartContainer for \"734a0c07f1d95e65f885909861cb3671b22c34c1b100a394cdae81d8bb0d079c\" returns successfully" Jul 7 06:17:01.302342 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 06:17:01.302439 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 7 06:17:01.469997 kubelet[3113]: I0707 06:17:01.469913 3113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/77f216df-f1a6-4265-a0aa-6eed0541752d-whisker-backend-key-pair\") pod \"77f216df-f1a6-4265-a0aa-6eed0541752d\" (UID: \"77f216df-f1a6-4265-a0aa-6eed0541752d\") " Jul 7 06:17:01.470325 kubelet[3113]: I0707 06:17:01.470067 3113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77f216df-f1a6-4265-a0aa-6eed0541752d-whisker-ca-bundle\") pod \"77f216df-f1a6-4265-a0aa-6eed0541752d\" (UID: \"77f216df-f1a6-4265-a0aa-6eed0541752d\") " Jul 7 06:17:01.470325 kubelet[3113]: I0707 06:17:01.470105 3113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9fcz\" (UniqueName: \"kubernetes.io/projected/77f216df-f1a6-4265-a0aa-6eed0541752d-kube-api-access-j9fcz\") pod \"77f216df-f1a6-4265-a0aa-6eed0541752d\" (UID: \"77f216df-f1a6-4265-a0aa-6eed0541752d\") " Jul 7 06:17:01.472310 kubelet[3113]: I0707 06:17:01.472259 3113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77f216df-f1a6-4265-a0aa-6eed0541752d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "77f216df-f1a6-4265-a0aa-6eed0541752d" (UID: "77f216df-f1a6-4265-a0aa-6eed0541752d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 06:17:01.475734 kubelet[3113]: I0707 06:17:01.475703 3113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f216df-f1a6-4265-a0aa-6eed0541752d-kube-api-access-j9fcz" (OuterVolumeSpecName: "kube-api-access-j9fcz") pod "77f216df-f1a6-4265-a0aa-6eed0541752d" (UID: "77f216df-f1a6-4265-a0aa-6eed0541752d"). InnerVolumeSpecName "kube-api-access-j9fcz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:17:01.475838 kubelet[3113]: I0707 06:17:01.475787 3113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77f216df-f1a6-4265-a0aa-6eed0541752d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "77f216df-f1a6-4265-a0aa-6eed0541752d" (UID: "77f216df-f1a6-4265-a0aa-6eed0541752d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 06:17:01.570429 kubelet[3113]: I0707 06:17:01.570401 3113 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/77f216df-f1a6-4265-a0aa-6eed0541752d-whisker-backend-key-pair\") on node \"ci-4372.0.1-a-ca7a3a169f\" DevicePath \"\"" Jul 7 06:17:01.570523 kubelet[3113]: I0707 06:17:01.570431 3113 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77f216df-f1a6-4265-a0aa-6eed0541752d-whisker-ca-bundle\") on node \"ci-4372.0.1-a-ca7a3a169f\" DevicePath \"\"" Jul 7 06:17:01.570523 kubelet[3113]: I0707 06:17:01.570453 3113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j9fcz\" (UniqueName: \"kubernetes.io/projected/77f216df-f1a6-4265-a0aa-6eed0541752d-kube-api-access-j9fcz\") on node \"ci-4372.0.1-a-ca7a3a169f\" DevicePath \"\"" Jul 7 06:17:01.780089 systemd[1]: Removed slice kubepods-besteffort-pod77f216df_f1a6_4265_a0aa_6eed0541752d.slice - libcontainer container kubepods-besteffort-pod77f216df_f1a6_4265_a0aa_6eed0541752d.slice. 
Jul 7 06:17:01.791911 kubelet[3113]: I0707 06:17:01.791855 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qmlst" podStartSLOduration=1.430601007 podStartE2EDuration="16.791839094s" podCreationTimestamp="2025-07-07 06:16:45 +0000 UTC" firstStartedPulling="2025-07-07 06:16:45.621840684 +0000 UTC m=+19.078286401" lastFinishedPulling="2025-07-07 06:17:00.983078767 +0000 UTC m=+34.439524488" observedRunningTime="2025-07-07 06:17:01.79127786 +0000 UTC m=+35.247723603" watchObservedRunningTime="2025-07-07 06:17:01.791839094 +0000 UTC m=+35.248284813" Jul 7 06:17:01.865168 systemd[1]: Created slice kubepods-besteffort-pod5840f83e_c37d_471b_9a19_88223e60a21a.slice - libcontainer container kubepods-besteffort-pod5840f83e_c37d_471b_9a19_88223e60a21a.slice. Jul 7 06:17:01.872041 kubelet[3113]: I0707 06:17:01.872011 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5840f83e-c37d-471b-9a19-88223e60a21a-whisker-ca-bundle\") pod \"whisker-d899679c8-jh7jd\" (UID: \"5840f83e-c37d-471b-9a19-88223e60a21a\") " pod="calico-system/whisker-d899679c8-jh7jd" Jul 7 06:17:01.873920 kubelet[3113]: I0707 06:17:01.873894 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5840f83e-c37d-471b-9a19-88223e60a21a-whisker-backend-key-pair\") pod \"whisker-d899679c8-jh7jd\" (UID: \"5840f83e-c37d-471b-9a19-88223e60a21a\") " pod="calico-system/whisker-d899679c8-jh7jd" Jul 7 06:17:01.874031 kubelet[3113]: I0707 06:17:01.874021 3113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk79f\" (UniqueName: \"kubernetes.io/projected/5840f83e-c37d-471b-9a19-88223e60a21a-kube-api-access-pk79f\") pod \"whisker-d899679c8-jh7jd\" (UID: \"5840f83e-c37d-471b-9a19-88223e60a21a\") " 
pod="calico-system/whisker-d899679c8-jh7jd" Jul 7 06:17:01.924169 systemd[1]: var-lib-kubelet-pods-77f216df\x2df1a6\x2d4265\x2da0aa\x2d6eed0541752d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj9fcz.mount: Deactivated successfully. Jul 7 06:17:01.924263 systemd[1]: var-lib-kubelet-pods-77f216df\x2df1a6\x2d4265\x2da0aa\x2d6eed0541752d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 06:17:02.169782 containerd[1734]: time="2025-07-07T06:17:02.169666408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d899679c8-jh7jd,Uid:5840f83e-c37d-471b-9a19-88223e60a21a,Namespace:calico-system,Attempt:0,}" Jul 7 06:17:02.292052 systemd-networkd[1361]: calief136e8e05f: Link UP Jul 7 06:17:02.292969 systemd-networkd[1361]: calief136e8e05f: Gained carrier Jul 7 06:17:02.308132 containerd[1734]: 2025-07-07 06:17:02.202 [INFO][4254] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:17:02.308132 containerd[1734]: 2025-07-07 06:17:02.212 [INFO][4254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--ca7a3a169f-k8s-whisker--d899679c8--jh7jd-eth0 whisker-d899679c8- calico-system 5840f83e-c37d-471b-9a19-88223e60a21a 912 0 2025-07-07 06:17:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d899679c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.0.1-a-ca7a3a169f whisker-d899679c8-jh7jd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calief136e8e05f [] [] }} ContainerID="b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" Namespace="calico-system" Pod="whisker-d899679c8-jh7jd" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-whisker--d899679c8--jh7jd-" Jul 7 06:17:02.308132 containerd[1734]: 2025-07-07 06:17:02.212 [INFO][4254] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" Namespace="calico-system" Pod="whisker-d899679c8-jh7jd" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-whisker--d899679c8--jh7jd-eth0" Jul 7 06:17:02.308132 containerd[1734]: 2025-07-07 06:17:02.234 [INFO][4265] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" HandleID="k8s-pod-network.b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-whisker--d899679c8--jh7jd-eth0" Jul 7 06:17:02.308373 containerd[1734]: 2025-07-07 06:17:02.234 [INFO][4265] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" HandleID="k8s-pod-network.b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-whisker--d899679c8--jh7jd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-a-ca7a3a169f", "pod":"whisker-d899679c8-jh7jd", "timestamp":"2025-07-07 06:17:02.234597532 +0000 UTC"}, Hostname:"ci-4372.0.1-a-ca7a3a169f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:17:02.308373 containerd[1734]: 2025-07-07 06:17:02.234 [INFO][4265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:17:02.308373 containerd[1734]: 2025-07-07 06:17:02.234 [INFO][4265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:17:02.308373 containerd[1734]: 2025-07-07 06:17:02.234 [INFO][4265] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-ca7a3a169f' Jul 7 06:17:02.308373 containerd[1734]: 2025-07-07 06:17:02.240 [INFO][4265] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:02.308373 containerd[1734]: 2025-07-07 06:17:02.243 [INFO][4265] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:02.308373 containerd[1734]: 2025-07-07 06:17:02.248 [INFO][4265] ipam/ipam.go 511: Trying affinity for 192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:02.308373 containerd[1734]: 2025-07-07 06:17:02.249 [INFO][4265] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:02.308373 containerd[1734]: 2025-07-07 06:17:02.250 [INFO][4265] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:02.308671 containerd[1734]: 2025-07-07 06:17:02.250 [INFO][4265] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:02.308671 containerd[1734]: 2025-07-07 06:17:02.251 [INFO][4265] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3 Jul 7 06:17:02.308671 containerd[1734]: 2025-07-07 06:17:02.255 [INFO][4265] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:02.308671 containerd[1734]: 2025-07-07 06:17:02.266 [INFO][4265] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.90.193/26] block=192.168.90.192/26 handle="k8s-pod-network.b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:02.308671 containerd[1734]: 2025-07-07 06:17:02.266 [INFO][4265] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.193/26] handle="k8s-pod-network.b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:02.308671 containerd[1734]: 2025-07-07 06:17:02.266 [INFO][4265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:17:02.308671 containerd[1734]: 2025-07-07 06:17:02.266 [INFO][4265] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.193/26] IPv6=[] ContainerID="b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" HandleID="k8s-pod-network.b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-whisker--d899679c8--jh7jd-eth0" Jul 7 06:17:02.308889 containerd[1734]: 2025-07-07 06:17:02.268 [INFO][4254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" Namespace="calico-system" Pod="whisker-d899679c8-jh7jd" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-whisker--d899679c8--jh7jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-whisker--d899679c8--jh7jd-eth0", GenerateName:"whisker-d899679c8-", Namespace:"calico-system", SelfLink:"", UID:"5840f83e-c37d-471b-9a19-88223e60a21a", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 17, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d899679c8", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"", Pod:"whisker-d899679c8-jh7jd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.90.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calief136e8e05f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:02.308889 containerd[1734]: 2025-07-07 06:17:02.268 [INFO][4254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.193/32] ContainerID="b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" Namespace="calico-system" Pod="whisker-d899679c8-jh7jd" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-whisker--d899679c8--jh7jd-eth0" Jul 7 06:17:02.309034 containerd[1734]: 2025-07-07 06:17:02.268 [INFO][4254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief136e8e05f ContainerID="b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" Namespace="calico-system" Pod="whisker-d899679c8-jh7jd" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-whisker--d899679c8--jh7jd-eth0" Jul 7 06:17:02.309034 containerd[1734]: 2025-07-07 06:17:02.291 [INFO][4254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" Namespace="calico-system" Pod="whisker-d899679c8-jh7jd" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-whisker--d899679c8--jh7jd-eth0" Jul 7 06:17:02.309107 containerd[1734]: 2025-07-07 06:17:02.293 [INFO][4254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" Namespace="calico-system" Pod="whisker-d899679c8-jh7jd" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-whisker--d899679c8--jh7jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-whisker--d899679c8--jh7jd-eth0", GenerateName:"whisker-d899679c8-", Namespace:"calico-system", SelfLink:"", UID:"5840f83e-c37d-471b-9a19-88223e60a21a", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 17, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d899679c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3", Pod:"whisker-d899679c8-jh7jd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.90.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calief136e8e05f", MAC:"86:c3:36:89:0a:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:02.309186 containerd[1734]: 2025-07-07 06:17:02.305 [INFO][4254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" Namespace="calico-system" Pod="whisker-d899679c8-jh7jd" 
WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-whisker--d899679c8--jh7jd-eth0" Jul 7 06:17:02.348097 containerd[1734]: time="2025-07-07T06:17:02.348048320Z" level=info msg="connecting to shim b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3" address="unix:///run/containerd/s/477bac799aa5c06321c6563914a9d27a7944be739a60cb4bdd227e881b5939ad" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:17:02.363961 systemd[1]: Started cri-containerd-b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3.scope - libcontainer container b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3. Jul 7 06:17:02.409396 containerd[1734]: time="2025-07-07T06:17:02.409363322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d899679c8-jh7jd,Uid:5840f83e-c37d-471b-9a19-88223e60a21a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3\"" Jul 7 06:17:02.411082 containerd[1734]: time="2025-07-07T06:17:02.410733280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 06:17:02.639083 kubelet[3113]: I0707 06:17:02.638174 3113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77f216df-f1a6-4265-a0aa-6eed0541752d" path="/var/lib/kubelet/pods/77f216df-f1a6-4265-a0aa-6eed0541752d/volumes" Jul 7 06:17:03.151741 systemd-networkd[1361]: vxlan.calico: Link UP Jul 7 06:17:03.151751 systemd-networkd[1361]: vxlan.calico: Gained carrier Jul 7 06:17:03.470917 systemd-networkd[1361]: calief136e8e05f: Gained IPv6LL Jul 7 06:17:03.608526 containerd[1734]: time="2025-07-07T06:17:03.608477575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:03.611468 containerd[1734]: time="2025-07-07T06:17:03.611443113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 06:17:03.615168 
containerd[1734]: time="2025-07-07T06:17:03.615113242Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:03.619173 containerd[1734]: time="2025-07-07T06:17:03.619126389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:03.619861 containerd[1734]: time="2025-07-07T06:17:03.619497569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.208714752s" Jul 7 06:17:03.619861 containerd[1734]: time="2025-07-07T06:17:03.619532363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 06:17:03.625945 containerd[1734]: time="2025-07-07T06:17:03.625909030Z" level=info msg="CreateContainer within sandbox \"b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 06:17:03.657513 containerd[1734]: time="2025-07-07T06:17:03.655915130Z" level=info msg="Container e9100a2625ea1efae5a7ecad6db2020bd055354380dfd0c3e7737f13911ebbdb: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:03.680582 containerd[1734]: time="2025-07-07T06:17:03.680555230Z" level=info msg="CreateContainer within sandbox \"b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"e9100a2625ea1efae5a7ecad6db2020bd055354380dfd0c3e7737f13911ebbdb\"" Jul 7 06:17:03.681009 containerd[1734]: time="2025-07-07T06:17:03.680978956Z" level=info msg="StartContainer for \"e9100a2625ea1efae5a7ecad6db2020bd055354380dfd0c3e7737f13911ebbdb\"" Jul 7 06:17:03.682134 containerd[1734]: time="2025-07-07T06:17:03.682108784Z" level=info msg="connecting to shim e9100a2625ea1efae5a7ecad6db2020bd055354380dfd0c3e7737f13911ebbdb" address="unix:///run/containerd/s/477bac799aa5c06321c6563914a9d27a7944be739a60cb4bdd227e881b5939ad" protocol=ttrpc version=3 Jul 7 06:17:03.700965 systemd[1]: Started cri-containerd-e9100a2625ea1efae5a7ecad6db2020bd055354380dfd0c3e7737f13911ebbdb.scope - libcontainer container e9100a2625ea1efae5a7ecad6db2020bd055354380dfd0c3e7737f13911ebbdb. Jul 7 06:17:03.743709 containerd[1734]: time="2025-07-07T06:17:03.743584088Z" level=info msg="StartContainer for \"e9100a2625ea1efae5a7ecad6db2020bd055354380dfd0c3e7737f13911ebbdb\" returns successfully" Jul 7 06:17:03.746040 containerd[1734]: time="2025-07-07T06:17:03.746008372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 06:17:04.622999 systemd-networkd[1361]: vxlan.calico: Gained IPv6LL Jul 7 06:17:05.518407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2299680910.mount: Deactivated successfully. 
Jul 7 06:17:05.596503 containerd[1734]: time="2025-07-07T06:17:05.596456802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:05.603410 containerd[1734]: time="2025-07-07T06:17:05.603368406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 06:17:05.609871 containerd[1734]: time="2025-07-07T06:17:05.609816383Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:05.613911 containerd[1734]: time="2025-07-07T06:17:05.613857725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:05.614484 containerd[1734]: time="2025-07-07T06:17:05.614330829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.868204183s" Jul 7 06:17:05.614484 containerd[1734]: time="2025-07-07T06:17:05.614365982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 06:17:05.623323 containerd[1734]: time="2025-07-07T06:17:05.623282383Z" level=info msg="CreateContainer within sandbox \"b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 06:17:05.651965 
containerd[1734]: time="2025-07-07T06:17:05.651934174Z" level=info msg="Container 18ed4ba701198d6743164fe0625b8d66582a06a6ea8cc4a1a4d3e4a424c13c4b: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:05.670032 containerd[1734]: time="2025-07-07T06:17:05.669986105Z" level=info msg="CreateContainer within sandbox \"b17f5c65370846199683d945824460100019a868d3eddc843ab39452ed15d3f3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"18ed4ba701198d6743164fe0625b8d66582a06a6ea8cc4a1a4d3e4a424c13c4b\"" Jul 7 06:17:05.671633 containerd[1734]: time="2025-07-07T06:17:05.670445352Z" level=info msg="StartContainer for \"18ed4ba701198d6743164fe0625b8d66582a06a6ea8cc4a1a4d3e4a424c13c4b\"" Jul 7 06:17:05.671633 containerd[1734]: time="2025-07-07T06:17:05.671510795Z" level=info msg="connecting to shim 18ed4ba701198d6743164fe0625b8d66582a06a6ea8cc4a1a4d3e4a424c13c4b" address="unix:///run/containerd/s/477bac799aa5c06321c6563914a9d27a7944be739a60cb4bdd227e881b5939ad" protocol=ttrpc version=3 Jul 7 06:17:05.695974 systemd[1]: Started cri-containerd-18ed4ba701198d6743164fe0625b8d66582a06a6ea8cc4a1a4d3e4a424c13c4b.scope - libcontainer container 18ed4ba701198d6743164fe0625b8d66582a06a6ea8cc4a1a4d3e4a424c13c4b. 
Jul 7 06:17:05.744572 containerd[1734]: time="2025-07-07T06:17:05.744536546Z" level=info msg="StartContainer for \"18ed4ba701198d6743164fe0625b8d66582a06a6ea8cc4a1a4d3e4a424c13c4b\" returns successfully" Jul 7 06:17:07.166840 kubelet[3113]: I0707 06:17:07.166752 3113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:17:07.230584 containerd[1734]: time="2025-07-07T06:17:07.230532549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"734a0c07f1d95e65f885909861cb3671b22c34c1b100a394cdae81d8bb0d079c\" id:\"0790997a5dfdb42f9af23875a85cd73310b3b48fd75337b145dda9acd871a8c0\" pid:4614 exited_at:{seconds:1751869027 nanos:230220810}" Jul 7 06:17:07.254782 kubelet[3113]: I0707 06:17:07.254671 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-d899679c8-jh7jd" podStartSLOduration=3.049858386 podStartE2EDuration="6.254652207s" podCreationTimestamp="2025-07-07 06:17:01 +0000 UTC" firstStartedPulling="2025-07-07 06:17:02.410482328 +0000 UTC m=+35.866928039" lastFinishedPulling="2025-07-07 06:17:05.615276145 +0000 UTC m=+39.071721860" observedRunningTime="2025-07-07 06:17:05.79989938 +0000 UTC m=+39.256345118" watchObservedRunningTime="2025-07-07 06:17:07.254652207 +0000 UTC m=+40.711097923" Jul 7 06:17:07.298772 containerd[1734]: time="2025-07-07T06:17:07.298738066Z" level=info msg="TaskExit event in podsandbox handler container_id:\"734a0c07f1d95e65f885909861cb3671b22c34c1b100a394cdae81d8bb0d079c\" id:\"7ba51415f252263c859ce430844ba1db83485200748c605075eb2597af10df63\" pid:4637 exited_at:{seconds:1751869027 nanos:298548220}" Jul 7 06:17:08.635118 containerd[1734]: time="2025-07-07T06:17:08.634984920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xjf89,Uid:388c6467-152d-4b03-8422-51ecbced9736,Namespace:calico-system,Attempt:0,}" Jul 7 06:17:08.730788 systemd-networkd[1361]: cali65455388e21: Link UP Jul 7 06:17:08.732007 systemd-networkd[1361]: 
cali65455388e21: Gained carrier Jul 7 06:17:08.749013 containerd[1734]: 2025-07-07 06:17:08.671 [INFO][4658] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--ca7a3a169f-k8s-goldmane--768f4c5c69--xjf89-eth0 goldmane-768f4c5c69- calico-system 388c6467-152d-4b03-8422-51ecbced9736 846 0 2025-07-07 06:16:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.0.1-a-ca7a3a169f goldmane-768f4c5c69-xjf89 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali65455388e21 [] [] }} ContainerID="97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjf89" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-goldmane--768f4c5c69--xjf89-" Jul 7 06:17:08.749013 containerd[1734]: 2025-07-07 06:17:08.671 [INFO][4658] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjf89" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-goldmane--768f4c5c69--xjf89-eth0" Jul 7 06:17:08.749013 containerd[1734]: 2025-07-07 06:17:08.694 [INFO][4669] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" HandleID="k8s-pod-network.97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-goldmane--768f4c5c69--xjf89-eth0" Jul 7 06:17:08.749317 containerd[1734]: 2025-07-07 06:17:08.694 [INFO][4669] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" 
HandleID="k8s-pod-network.97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-goldmane--768f4c5c69--xjf89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5740), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-a-ca7a3a169f", "pod":"goldmane-768f4c5c69-xjf89", "timestamp":"2025-07-07 06:17:08.694496502 +0000 UTC"}, Hostname:"ci-4372.0.1-a-ca7a3a169f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:17:08.749317 containerd[1734]: 2025-07-07 06:17:08.694 [INFO][4669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:17:08.749317 containerd[1734]: 2025-07-07 06:17:08.694 [INFO][4669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:17:08.749317 containerd[1734]: 2025-07-07 06:17:08.694 [INFO][4669] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-ca7a3a169f' Jul 7 06:17:08.749317 containerd[1734]: 2025-07-07 06:17:08.700 [INFO][4669] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:08.749317 containerd[1734]: 2025-07-07 06:17:08.706 [INFO][4669] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:08.749317 containerd[1734]: 2025-07-07 06:17:08.709 [INFO][4669] ipam/ipam.go 511: Trying affinity for 192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:08.749317 containerd[1734]: 2025-07-07 06:17:08.710 [INFO][4669] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:08.749317 containerd[1734]: 2025-07-07 06:17:08.712 [INFO][4669] ipam/ipam.go 235: Affinity is confirmed and block has 
been loaded cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:08.749570 containerd[1734]: 2025-07-07 06:17:08.712 [INFO][4669] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:08.749570 containerd[1734]: 2025-07-07 06:17:08.713 [INFO][4669] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0 Jul 7 06:17:08.749570 containerd[1734]: 2025-07-07 06:17:08.717 [INFO][4669] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:08.749570 containerd[1734]: 2025-07-07 06:17:08.725 [INFO][4669] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.90.194/26] block=192.168.90.192/26 handle="k8s-pod-network.97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:08.749570 containerd[1734]: 2025-07-07 06:17:08.726 [INFO][4669] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.194/26] handle="k8s-pod-network.97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:08.749570 containerd[1734]: 2025-07-07 06:17:08.726 [INFO][4669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:17:08.749570 containerd[1734]: 2025-07-07 06:17:08.726 [INFO][4669] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.194/26] IPv6=[] ContainerID="97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" HandleID="k8s-pod-network.97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-goldmane--768f4c5c69--xjf89-eth0" Jul 7 06:17:08.750397 containerd[1734]: 2025-07-07 06:17:08.728 [INFO][4658] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjf89" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-goldmane--768f4c5c69--xjf89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-goldmane--768f4c5c69--xjf89-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"388c6467-152d-4b03-8422-51ecbced9736", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"", Pod:"goldmane-768f4c5c69-xjf89", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.90.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali65455388e21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:08.750397 containerd[1734]: 2025-07-07 06:17:08.728 [INFO][4658] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.194/32] ContainerID="97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjf89" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-goldmane--768f4c5c69--xjf89-eth0" Jul 7 06:17:08.750699 containerd[1734]: 2025-07-07 06:17:08.728 [INFO][4658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65455388e21 ContainerID="97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjf89" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-goldmane--768f4c5c69--xjf89-eth0" Jul 7 06:17:08.750699 containerd[1734]: 2025-07-07 06:17:08.732 [INFO][4658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjf89" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-goldmane--768f4c5c69--xjf89-eth0" Jul 7 06:17:08.750748 containerd[1734]: 2025-07-07 06:17:08.733 [INFO][4658] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjf89" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-goldmane--768f4c5c69--xjf89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-goldmane--768f4c5c69--xjf89-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", 
UID:"388c6467-152d-4b03-8422-51ecbced9736", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0", Pod:"goldmane-768f4c5c69-xjf89", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.90.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali65455388e21", MAC:"12:b6:96:e1:30:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:08.752186 containerd[1734]: 2025-07-07 06:17:08.745 [INFO][4658] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjf89" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-goldmane--768f4c5c69--xjf89-eth0" Jul 7 06:17:08.810242 containerd[1734]: time="2025-07-07T06:17:08.810189163Z" level=info msg="connecting to shim 97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0" address="unix:///run/containerd/s/5bfc36fc5a47f37b18e14b96cec3686801ec53469f011973ceddf962f45d7dc4" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:17:08.834945 systemd[1]: Started 
cri-containerd-97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0.scope - libcontainer container 97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0. Jul 7 06:17:08.876162 containerd[1734]: time="2025-07-07T06:17:08.876118986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xjf89,Uid:388c6467-152d-4b03-8422-51ecbced9736,Namespace:calico-system,Attempt:0,} returns sandbox id \"97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0\"" Jul 7 06:17:08.877498 containerd[1734]: time="2025-07-07T06:17:08.877475521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 06:17:09.634261 containerd[1734]: time="2025-07-07T06:17:09.634195453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pv69l,Uid:d40c33b8-3587-4ee3-89bf-e1ad24c997ef,Namespace:calico-system,Attempt:0,}" Jul 7 06:17:09.749405 systemd-networkd[1361]: caliac57c0616ac: Link UP Jul 7 06:17:09.750920 systemd-networkd[1361]: caliac57c0616ac: Gained carrier Jul 7 06:17:09.770295 containerd[1734]: 2025-07-07 06:17:09.684 [INFO][4732] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--ca7a3a169f-k8s-csi--node--driver--pv69l-eth0 csi-node-driver- calico-system d40c33b8-3587-4ee3-89bf-e1ad24c997ef 738 0 2025-07-07 06:16:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.0.1-a-ca7a3a169f csi-node-driver-pv69l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliac57c0616ac [] [] }} ContainerID="76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" Namespace="calico-system" Pod="csi-node-driver-pv69l" 
WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-csi--node--driver--pv69l-" Jul 7 06:17:09.770295 containerd[1734]: 2025-07-07 06:17:09.685 [INFO][4732] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" Namespace="calico-system" Pod="csi-node-driver-pv69l" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-csi--node--driver--pv69l-eth0" Jul 7 06:17:09.770295 containerd[1734]: 2025-07-07 06:17:09.712 [INFO][4745] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" HandleID="k8s-pod-network.76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-csi--node--driver--pv69l-eth0" Jul 7 06:17:09.770746 containerd[1734]: 2025-07-07 06:17:09.713 [INFO][4745] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" HandleID="k8s-pod-network.76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-csi--node--driver--pv69l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f250), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-a-ca7a3a169f", "pod":"csi-node-driver-pv69l", "timestamp":"2025-07-07 06:17:09.712882312 +0000 UTC"}, Hostname:"ci-4372.0.1-a-ca7a3a169f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:17:09.770746 containerd[1734]: 2025-07-07 06:17:09.713 [INFO][4745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:17:09.770746 containerd[1734]: 2025-07-07 06:17:09.713 [INFO][4745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:17:09.770746 containerd[1734]: 2025-07-07 06:17:09.713 [INFO][4745] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-ca7a3a169f' Jul 7 06:17:09.770746 containerd[1734]: 2025-07-07 06:17:09.719 [INFO][4745] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:09.770746 containerd[1734]: 2025-07-07 06:17:09.723 [INFO][4745] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:09.770746 containerd[1734]: 2025-07-07 06:17:09.726 [INFO][4745] ipam/ipam.go 511: Trying affinity for 192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:09.770746 containerd[1734]: 2025-07-07 06:17:09.728 [INFO][4745] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:09.770746 containerd[1734]: 2025-07-07 06:17:09.729 [INFO][4745] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:09.772093 containerd[1734]: 2025-07-07 06:17:09.729 [INFO][4745] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:09.772093 containerd[1734]: 2025-07-07 06:17:09.730 [INFO][4745] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c Jul 7 06:17:09.772093 containerd[1734]: 2025-07-07 06:17:09.736 [INFO][4745] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:09.772093 containerd[1734]: 2025-07-07 06:17:09.744 [INFO][4745] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.90.195/26] block=192.168.90.192/26 handle="k8s-pod-network.76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:09.772093 containerd[1734]: 2025-07-07 06:17:09.744 [INFO][4745] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.195/26] handle="k8s-pod-network.76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:09.772093 containerd[1734]: 2025-07-07 06:17:09.744 [INFO][4745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:17:09.772093 containerd[1734]: 2025-07-07 06:17:09.744 [INFO][4745] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.195/26] IPv6=[] ContainerID="76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" HandleID="k8s-pod-network.76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-csi--node--driver--pv69l-eth0" Jul 7 06:17:09.772263 containerd[1734]: 2025-07-07 06:17:09.745 [INFO][4732] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" Namespace="calico-system" Pod="csi-node-driver-pv69l" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-csi--node--driver--pv69l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-csi--node--driver--pv69l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d40c33b8-3587-4ee3-89bf-e1ad24c997ef", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"", Pod:"csi-node-driver-pv69l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.90.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac57c0616ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:09.772344 containerd[1734]: 2025-07-07 06:17:09.746 [INFO][4732] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.195/32] ContainerID="76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" Namespace="calico-system" Pod="csi-node-driver-pv69l" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-csi--node--driver--pv69l-eth0" Jul 7 06:17:09.772344 containerd[1734]: 2025-07-07 06:17:09.746 [INFO][4732] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac57c0616ac ContainerID="76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" Namespace="calico-system" Pod="csi-node-driver-pv69l" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-csi--node--driver--pv69l-eth0" Jul 7 06:17:09.772344 containerd[1734]: 2025-07-07 06:17:09.750 [INFO][4732] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" Namespace="calico-system" Pod="csi-node-driver-pv69l" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-csi--node--driver--pv69l-eth0" Jul 7 06:17:09.772416 containerd[1734]: 2025-07-07 06:17:09.752 
[INFO][4732] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" Namespace="calico-system" Pod="csi-node-driver-pv69l" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-csi--node--driver--pv69l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-csi--node--driver--pv69l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d40c33b8-3587-4ee3-89bf-e1ad24c997ef", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c", Pod:"csi-node-driver-pv69l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.90.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac57c0616ac", MAC:"fe:d2:41:19:f9:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:09.772474 containerd[1734]: 2025-07-07 06:17:09.766 [INFO][4732] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" Namespace="calico-system" Pod="csi-node-driver-pv69l" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-csi--node--driver--pv69l-eth0" Jul 7 06:17:09.824532 containerd[1734]: time="2025-07-07T06:17:09.824487652Z" level=info msg="connecting to shim 76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c" address="unix:///run/containerd/s/f170e447c98e0533fc8c2f2289f960216d24088ee0343d5c6b0889b9fe3ee8dd" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:17:09.857102 systemd[1]: Started cri-containerd-76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c.scope - libcontainer container 76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c. Jul 7 06:17:09.870953 systemd-networkd[1361]: cali65455388e21: Gained IPv6LL Jul 7 06:17:09.938695 containerd[1734]: time="2025-07-07T06:17:09.938550403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pv69l,Uid:d40c33b8-3587-4ee3-89bf-e1ad24c997ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c\"" Jul 7 06:17:10.635871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184067452.mount: Deactivated successfully. 
Jul 7 06:17:10.638878 containerd[1734]: time="2025-07-07T06:17:10.638677059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6676f4b4fb-w9c8j,Uid:4fea7350-6dea-4f40-b757-4ac4780b3628,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:17:10.640722 containerd[1734]: time="2025-07-07T06:17:10.640649797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6676f4b4fb-62vxk,Uid:a747d663-7dc8-4f34-ad5a-4c321e7b0a28,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:17:10.641151 containerd[1734]: time="2025-07-07T06:17:10.641131658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hf657,Uid:22773889-0774-4541-902a-94cea1187ad6,Namespace:kube-system,Attempt:0,}" Jul 7 06:17:10.967661 systemd-networkd[1361]: cali863ce6715dc: Link UP Jul 7 06:17:10.967922 systemd-networkd[1361]: cali863ce6715dc: Gained carrier Jul 7 06:17:10.988857 containerd[1734]: 2025-07-07 06:17:10.778 [INFO][4834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--hf657-eth0 coredns-674b8bbfcf- kube-system 22773889-0774-4541-902a-94cea1187ad6 840 0 2025-07-07 06:16:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-a-ca7a3a169f coredns-674b8bbfcf-hf657 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali863ce6715dc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" Namespace="kube-system" Pod="coredns-674b8bbfcf-hf657" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--hf657-" Jul 7 06:17:10.988857 containerd[1734]: 2025-07-07 06:17:10.779 [INFO][4834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" Namespace="kube-system" Pod="coredns-674b8bbfcf-hf657" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--hf657-eth0" Jul 7 06:17:10.988857 containerd[1734]: 2025-07-07 06:17:10.887 [INFO][4857] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" HandleID="k8s-pod-network.d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--hf657-eth0" Jul 7 06:17:10.989498 containerd[1734]: 2025-07-07 06:17:10.887 [INFO][4857] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" HandleID="k8s-pod-network.d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--hf657-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031f980), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-a-ca7a3a169f", "pod":"coredns-674b8bbfcf-hf657", "timestamp":"2025-07-07 06:17:10.887490162 +0000 UTC"}, Hostname:"ci-4372.0.1-a-ca7a3a169f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:17:10.989498 containerd[1734]: 2025-07-07 06:17:10.887 [INFO][4857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:17:10.989498 containerd[1734]: 2025-07-07 06:17:10.887 [INFO][4857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:17:10.989498 containerd[1734]: 2025-07-07 06:17:10.887 [INFO][4857] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-ca7a3a169f' Jul 7 06:17:10.989498 containerd[1734]: 2025-07-07 06:17:10.911 [INFO][4857] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:10.989498 containerd[1734]: 2025-07-07 06:17:10.922 [INFO][4857] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:10.989498 containerd[1734]: 2025-07-07 06:17:10.929 [INFO][4857] ipam/ipam.go 511: Trying affinity for 192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:10.989498 containerd[1734]: 2025-07-07 06:17:10.931 [INFO][4857] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:10.989498 containerd[1734]: 2025-07-07 06:17:10.935 [INFO][4857] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:10.989711 containerd[1734]: 2025-07-07 06:17:10.935 [INFO][4857] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:10.989711 containerd[1734]: 2025-07-07 06:17:10.938 [INFO][4857] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023 Jul 7 06:17:10.989711 containerd[1734]: 2025-07-07 06:17:10.944 [INFO][4857] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:10.989711 containerd[1734]: 2025-07-07 06:17:10.955 [INFO][4857] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.90.196/26] block=192.168.90.192/26 handle="k8s-pod-network.d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:10.989711 containerd[1734]: 2025-07-07 06:17:10.956 [INFO][4857] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.196/26] handle="k8s-pod-network.d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:10.989711 containerd[1734]: 2025-07-07 06:17:10.956 [INFO][4857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:17:10.989711 containerd[1734]: 2025-07-07 06:17:10.956 [INFO][4857] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.196/26] IPv6=[] ContainerID="d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" HandleID="k8s-pod-network.d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--hf657-eth0" Jul 7 06:17:10.990585 containerd[1734]: 2025-07-07 06:17:10.960 [INFO][4834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" Namespace="kube-system" Pod="coredns-674b8bbfcf-hf657" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--hf657-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--hf657-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"22773889-0774-4541-902a-94cea1187ad6", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"", Pod:"coredns-674b8bbfcf-hf657", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali863ce6715dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:10.990585 containerd[1734]: 2025-07-07 06:17:10.960 [INFO][4834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.196/32] ContainerID="d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" Namespace="kube-system" Pod="coredns-674b8bbfcf-hf657" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--hf657-eth0" Jul 7 06:17:10.990585 containerd[1734]: 2025-07-07 06:17:10.960 [INFO][4834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali863ce6715dc ContainerID="d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" Namespace="kube-system" Pod="coredns-674b8bbfcf-hf657" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--hf657-eth0" Jul 7 06:17:10.990585 containerd[1734]: 2025-07-07 06:17:10.965 [INFO][4834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" Namespace="kube-system" Pod="coredns-674b8bbfcf-hf657" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--hf657-eth0" Jul 7 06:17:10.990585 containerd[1734]: 2025-07-07 06:17:10.966 [INFO][4834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" Namespace="kube-system" Pod="coredns-674b8bbfcf-hf657" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--hf657-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--hf657-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"22773889-0774-4541-902a-94cea1187ad6", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023", Pod:"coredns-674b8bbfcf-hf657", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali863ce6715dc", MAC:"22:6e:3e:7e:90:5c", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:10.990585 containerd[1734]: 2025-07-07 06:17:10.982 [INFO][4834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" Namespace="kube-system" Pod="coredns-674b8bbfcf-hf657" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--hf657-eth0" Jul 7 06:17:11.080088 containerd[1734]: time="2025-07-07T06:17:11.080022795Z" level=info msg="connecting to shim d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023" address="unix:///run/containerd/s/8d6245250ea851c706735c3a6e005fab96cecd52791980bfc484a36d379b242d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:17:11.088040 systemd-networkd[1361]: caliac57c0616ac: Gained IPv6LL Jul 7 06:17:11.141902 systemd[1]: Started cri-containerd-d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023.scope - libcontainer container d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023. 
Jul 7 06:17:11.152397 systemd-networkd[1361]: calif553a3e0d03: Link UP Jul 7 06:17:11.154514 systemd-networkd[1361]: calif553a3e0d03: Gained carrier Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:10.786 [INFO][4814] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--w9c8j-eth0 calico-apiserver-6676f4b4fb- calico-apiserver 4fea7350-6dea-4f40-b757-4ac4780b3628 849 0 2025-07-07 06:16:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6676f4b4fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-a-ca7a3a169f calico-apiserver-6676f4b4fb-w9c8j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif553a3e0d03 [] [] }} ContainerID="5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-w9c8j" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--w9c8j-" Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:10.786 [INFO][4814] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-w9c8j" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--w9c8j-eth0" Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:10.889 [INFO][4862] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" HandleID="k8s-pod-network.5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--w9c8j-eth0" Jul 7 06:17:11.188822 
containerd[1734]: 2025-07-07 06:17:10.889 [INFO][4862] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" HandleID="k8s-pod-network.5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--w9c8j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-a-ca7a3a169f", "pod":"calico-apiserver-6676f4b4fb-w9c8j", "timestamp":"2025-07-07 06:17:10.889029722 +0000 UTC"}, Hostname:"ci-4372.0.1-a-ca7a3a169f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:10.890 [INFO][4862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:10.956 [INFO][4862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:10.957 [INFO][4862] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-ca7a3a169f' Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:11.006 [INFO][4862] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:11.050 [INFO][4862] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:11.091 [INFO][4862] ipam/ipam.go 511: Trying affinity for 192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:11.094 [INFO][4862] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:11.099 [INFO][4862] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:11.099 [INFO][4862] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:11.103 [INFO][4862] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509 Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:11.124 [INFO][4862] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:11.136 [INFO][4862] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.90.197/26] block=192.168.90.192/26 handle="k8s-pod-network.5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:11.136 [INFO][4862] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.197/26] handle="k8s-pod-network.5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:11.136 [INFO][4862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:17:11.188822 containerd[1734]: 2025-07-07 06:17:11.136 [INFO][4862] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.197/26] IPv6=[] ContainerID="5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" HandleID="k8s-pod-network.5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--w9c8j-eth0" Jul 7 06:17:11.189342 containerd[1734]: 2025-07-07 06:17:11.147 [INFO][4814] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-w9c8j" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--w9c8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--w9c8j-eth0", GenerateName:"calico-apiserver-6676f4b4fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"4fea7350-6dea-4f40-b757-4ac4780b3628", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6676f4b4fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"", Pod:"calico-apiserver-6676f4b4fb-w9c8j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.90.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif553a3e0d03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:11.189342 containerd[1734]: 2025-07-07 06:17:11.148 [INFO][4814] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.197/32] ContainerID="5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-w9c8j" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--w9c8j-eth0" Jul 7 06:17:11.189342 containerd[1734]: 2025-07-07 06:17:11.148 [INFO][4814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif553a3e0d03 ContainerID="5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-w9c8j" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--w9c8j-eth0" Jul 7 06:17:11.189342 containerd[1734]: 2025-07-07 06:17:11.154 [INFO][4814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-w9c8j" 
WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--w9c8j-eth0" Jul 7 06:17:11.189342 containerd[1734]: 2025-07-07 06:17:11.160 [INFO][4814] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-w9c8j" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--w9c8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--w9c8j-eth0", GenerateName:"calico-apiserver-6676f4b4fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"4fea7350-6dea-4f40-b757-4ac4780b3628", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6676f4b4fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509", Pod:"calico-apiserver-6676f4b4fb-w9c8j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.90.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif553a3e0d03", MAC:"02:ce:c0:b6:d0:bd", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:11.189342 containerd[1734]: 2025-07-07 06:17:11.181 [INFO][4814] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-w9c8j" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--w9c8j-eth0" Jul 7 06:17:11.250995 containerd[1734]: time="2025-07-07T06:17:11.250869804Z" level=info msg="connecting to shim 5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509" address="unix:///run/containerd/s/85c781060564ad53db6bb532012c1ea74192238ddbbc90dc980d04222af7e34e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:17:11.286347 systemd-networkd[1361]: cali4d7329b6202: Link UP Jul 7 06:17:11.292870 systemd-networkd[1361]: cali4d7329b6202: Gained carrier Jul 7 06:17:11.321952 systemd[1]: Started cri-containerd-5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509.scope - libcontainer container 5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509. 
Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:10.798 [INFO][4820] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--62vxk-eth0 calico-apiserver-6676f4b4fb- calico-apiserver a747d663-7dc8-4f34-ad5a-4c321e7b0a28 847 0 2025-07-07 06:16:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6676f4b4fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-a-ca7a3a169f calico-apiserver-6676f4b4fb-62vxk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4d7329b6202 [] [] }} ContainerID="5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-62vxk" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--62vxk-" Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:10.798 [INFO][4820] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-62vxk" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--62vxk-eth0" Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:10.896 [INFO][4866] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" HandleID="k8s-pod-network.5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--62vxk-eth0" Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:10.897 [INFO][4866] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" HandleID="k8s-pod-network.5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--62vxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-a-ca7a3a169f", "pod":"calico-apiserver-6676f4b4fb-62vxk", "timestamp":"2025-07-07 06:17:10.895433905 +0000 UTC"}, Hostname:"ci-4372.0.1-a-ca7a3a169f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:10.897 [INFO][4866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:11.137 [INFO][4866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:11.138 [INFO][4866] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-ca7a3a169f' Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:11.173 [INFO][4866] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:11.183 [INFO][4866] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:11.209 [INFO][4866] ipam/ipam.go 511: Trying affinity for 192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:11.215 [INFO][4866] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:11.220 [INFO][4866] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:11.220 [INFO][4866] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:11.224 [INFO][4866] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:11.235 [INFO][4866] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:11.268 [INFO][4866] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.90.198/26] block=192.168.90.192/26 handle="k8s-pod-network.5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:11.268 [INFO][4866] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.198/26] handle="k8s-pod-network.5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:11.268 [INFO][4866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:17:11.334210 containerd[1734]: 2025-07-07 06:17:11.268 [INFO][4866] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.198/26] IPv6=[] ContainerID="5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" HandleID="k8s-pod-network.5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--62vxk-eth0" Jul 7 06:17:11.334771 containerd[1734]: 2025-07-07 06:17:11.273 [INFO][4820] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-62vxk" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--62vxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--62vxk-eth0", GenerateName:"calico-apiserver-6676f4b4fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a747d663-7dc8-4f34-ad5a-4c321e7b0a28", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6676f4b4fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"", Pod:"calico-apiserver-6676f4b4fb-62vxk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.90.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d7329b6202", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:11.334771 containerd[1734]: 2025-07-07 06:17:11.274 [INFO][4820] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.198/32] ContainerID="5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-62vxk" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--62vxk-eth0" Jul 7 06:17:11.334771 containerd[1734]: 2025-07-07 06:17:11.274 [INFO][4820] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d7329b6202 ContainerID="5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-62vxk" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--62vxk-eth0" Jul 7 06:17:11.334771 containerd[1734]: 2025-07-07 06:17:11.288 [INFO][4820] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-62vxk" 
WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--62vxk-eth0" Jul 7 06:17:11.334771 containerd[1734]: 2025-07-07 06:17:11.289 [INFO][4820] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-62vxk" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--62vxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--62vxk-eth0", GenerateName:"calico-apiserver-6676f4b4fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a747d663-7dc8-4f34-ad5a-4c321e7b0a28", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6676f4b4fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f", Pod:"calico-apiserver-6676f4b4fb-62vxk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.90.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d7329b6202", MAC:"16:eb:7b:c8:45:45", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:11.334771 containerd[1734]: 2025-07-07 06:17:11.320 [INFO][4820] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" Namespace="calico-apiserver" Pod="calico-apiserver-6676f4b4fb-62vxk" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--apiserver--6676f4b4fb--62vxk-eth0" Jul 7 06:17:11.350196 containerd[1734]: time="2025-07-07T06:17:11.350160246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hf657,Uid:22773889-0774-4541-902a-94cea1187ad6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023\"" Jul 7 06:17:11.362437 containerd[1734]: time="2025-07-07T06:17:11.362410765Z" level=info msg="CreateContainer within sandbox \"d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:17:11.403014 containerd[1734]: time="2025-07-07T06:17:11.402984216Z" level=info msg="Container 97c8256321166f7e88e22394ee6ac51afe9dc2dd0e2f061ce6dee6a655089d23: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:11.427948 containerd[1734]: time="2025-07-07T06:17:11.427906972Z" level=info msg="connecting to shim 5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f" address="unix:///run/containerd/s/12c19af02f3f7091fc54498aa062ef5ee329f3fd706b6fc76b92379d98c5e4c8" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:17:11.436793 containerd[1734]: time="2025-07-07T06:17:11.436760489Z" level=info msg="CreateContainer within sandbox \"d139488b814278a1660b020784b299e78d6dbd296c45e0778d357f5530b51023\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97c8256321166f7e88e22394ee6ac51afe9dc2dd0e2f061ce6dee6a655089d23\"" Jul 7 06:17:11.438145 containerd[1734]: 
time="2025-07-07T06:17:11.438074378Z" level=info msg="StartContainer for \"97c8256321166f7e88e22394ee6ac51afe9dc2dd0e2f061ce6dee6a655089d23\"" Jul 7 06:17:11.442816 containerd[1734]: time="2025-07-07T06:17:11.442771257Z" level=info msg="connecting to shim 97c8256321166f7e88e22394ee6ac51afe9dc2dd0e2f061ce6dee6a655089d23" address="unix:///run/containerd/s/8d6245250ea851c706735c3a6e005fab96cecd52791980bfc484a36d379b242d" protocol=ttrpc version=3 Jul 7 06:17:11.490954 systemd[1]: Started cri-containerd-97c8256321166f7e88e22394ee6ac51afe9dc2dd0e2f061ce6dee6a655089d23.scope - libcontainer container 97c8256321166f7e88e22394ee6ac51afe9dc2dd0e2f061ce6dee6a655089d23. Jul 7 06:17:11.500236 systemd[1]: Started cri-containerd-5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f.scope - libcontainer container 5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f. Jul 7 06:17:11.554989 containerd[1734]: time="2025-07-07T06:17:11.554907153Z" level=info msg="StartContainer for \"97c8256321166f7e88e22394ee6ac51afe9dc2dd0e2f061ce6dee6a655089d23\" returns successfully" Jul 7 06:17:11.631068 containerd[1734]: time="2025-07-07T06:17:11.631029230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6676f4b4fb-w9c8j,Uid:4fea7350-6dea-4f40-b757-4ac4780b3628,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509\"" Jul 7 06:17:11.634321 containerd[1734]: time="2025-07-07T06:17:11.634095935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nw88c,Uid:3b1aa788-9a19-4746-9123-ae8c0cbae91b,Namespace:kube-system,Attempt:0,}" Jul 7 06:17:11.635602 containerd[1734]: time="2025-07-07T06:17:11.635430583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565cbf9989-z97xc,Uid:77570737-2af5-46d1-a661-cdebb1903a96,Namespace:calico-system,Attempt:0,}" Jul 7 06:17:11.704604 containerd[1734]: 
time="2025-07-07T06:17:11.704338310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6676f4b4fb-62vxk,Uid:a747d663-7dc8-4f34-ad5a-4c321e7b0a28,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f\"" Jul 7 06:17:11.846229 kubelet[3113]: I0707 06:17:11.846077 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hf657" podStartSLOduration=39.845883801 podStartE2EDuration="39.845883801s" podCreationTimestamp="2025-07-07 06:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:17:11.844420078 +0000 UTC m=+45.300865802" watchObservedRunningTime="2025-07-07 06:17:11.845883801 +0000 UTC m=+45.302329523" Jul 7 06:17:12.006325 systemd-networkd[1361]: cali5920b809ba5: Link UP Jul 7 06:17:12.006547 systemd-networkd[1361]: cali5920b809ba5: Gained carrier Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.786 [INFO][5064] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--nw88c-eth0 coredns-674b8bbfcf- kube-system 3b1aa788-9a19-4746-9123-ae8c0cbae91b 842 0 2025-07-07 06:16:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-a-ca7a3a169f coredns-674b8bbfcf-nw88c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5920b809ba5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" Namespace="kube-system" Pod="coredns-674b8bbfcf-nw88c" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--nw88c-" Jul 7 06:17:12.041078 containerd[1734]: 
2025-07-07 06:17:11.787 [INFO][5064] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" Namespace="kube-system" Pod="coredns-674b8bbfcf-nw88c" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--nw88c-eth0" Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.902 [INFO][5094] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" HandleID="k8s-pod-network.66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--nw88c-eth0" Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.902 [INFO][5094] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" HandleID="k8s-pod-network.66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--nw88c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041a130), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-a-ca7a3a169f", "pod":"coredns-674b8bbfcf-nw88c", "timestamp":"2025-07-07 06:17:11.902558787 +0000 UTC"}, Hostname:"ci-4372.0.1-a-ca7a3a169f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.902 [INFO][5094] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.905 [INFO][5094] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.905 [INFO][5094] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-ca7a3a169f' Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.935 [INFO][5094] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.945 [INFO][5094] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.952 [INFO][5094] ipam/ipam.go 511: Trying affinity for 192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.954 [INFO][5094] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.958 [INFO][5094] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.958 [INFO][5094] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.960 [INFO][5094] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53 Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.974 [INFO][5094] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.989 [INFO][5094] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.90.199/26] block=192.168.90.192/26 handle="k8s-pod-network.66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.989 [INFO][5094] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.199/26] handle="k8s-pod-network.66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.989 [INFO][5094] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:17:12.041078 containerd[1734]: 2025-07-07 06:17:11.989 [INFO][5094] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.199/26] IPv6=[] ContainerID="66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" HandleID="k8s-pod-network.66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--nw88c-eth0" Jul 7 06:17:12.042402 containerd[1734]: 2025-07-07 06:17:11.993 [INFO][5064] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" Namespace="kube-system" Pod="coredns-674b8bbfcf-nw88c" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--nw88c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--nw88c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3b1aa788-9a19-4746-9123-ae8c0cbae91b", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"", Pod:"coredns-674b8bbfcf-nw88c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5920b809ba5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:12.042402 containerd[1734]: 2025-07-07 06:17:11.993 [INFO][5064] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.199/32] ContainerID="66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" Namespace="kube-system" Pod="coredns-674b8bbfcf-nw88c" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--nw88c-eth0" Jul 7 06:17:12.042402 containerd[1734]: 2025-07-07 06:17:11.994 [INFO][5064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5920b809ba5 ContainerID="66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" Namespace="kube-system" Pod="coredns-674b8bbfcf-nw88c" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--nw88c-eth0" Jul 7 06:17:12.042402 containerd[1734]: 2025-07-07 06:17:12.006 [INFO][5064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" Namespace="kube-system" Pod="coredns-674b8bbfcf-nw88c" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--nw88c-eth0" Jul 7 06:17:12.042402 containerd[1734]: 2025-07-07 06:17:12.006 [INFO][5064] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" Namespace="kube-system" Pod="coredns-674b8bbfcf-nw88c" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--nw88c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--nw88c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3b1aa788-9a19-4746-9123-ae8c0cbae91b", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53", Pod:"coredns-674b8bbfcf-nw88c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5920b809ba5", MAC:"7e:9d:10:99:19:8d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:12.042402 containerd[1734]: 2025-07-07 06:17:12.032 [INFO][5064] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" Namespace="kube-system" Pod="coredns-674b8bbfcf-nw88c" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-coredns--674b8bbfcf--nw88c-eth0" Jul 7 06:17:12.114123 systemd-networkd[1361]: cali3fac1c78199: Link UP Jul 7 06:17:12.118142 systemd-networkd[1361]: cali3fac1c78199: Gained carrier Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:11.784 [INFO][5074] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--ca7a3a169f-k8s-calico--kube--controllers--565cbf9989--z97xc-eth0 calico-kube-controllers-565cbf9989- calico-system 77570737-2af5-46d1-a661-cdebb1903a96 845 0 2025-07-07 06:16:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:565cbf9989 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.0.1-a-ca7a3a169f calico-kube-controllers-565cbf9989-z97xc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3fac1c78199 [] [] }} ContainerID="062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" Namespace="calico-system" 
Pod="calico-kube-controllers-565cbf9989-z97xc" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--kube--controllers--565cbf9989--z97xc-" Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:11.785 [INFO][5074] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" Namespace="calico-system" Pod="calico-kube-controllers-565cbf9989-z97xc" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--kube--controllers--565cbf9989--z97xc-eth0" Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:11.915 [INFO][5096] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" HandleID="k8s-pod-network.062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-calico--kube--controllers--565cbf9989--z97xc-eth0" Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:11.916 [INFO][5096] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" HandleID="k8s-pod-network.062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-calico--kube--controllers--565cbf9989--z97xc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000375b30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-a-ca7a3a169f", "pod":"calico-kube-controllers-565cbf9989-z97xc", "timestamp":"2025-07-07 06:17:11.915093254 +0000 UTC"}, Hostname:"ci-4372.0.1-a-ca7a3a169f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:11.916 [INFO][5096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:11.989 [INFO][5096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:11.989 [INFO][5096] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-ca7a3a169f' Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:12.031 [INFO][5096] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:12.051 [INFO][5096] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:12.061 [INFO][5096] ipam/ipam.go 511: Trying affinity for 192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:12.066 [INFO][5096] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:12.074 [INFO][5096] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.192/26 host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:12.074 [INFO][5096] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:12.079 [INFO][5096] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275 Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:12.089 [INFO][5096] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" 
host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:12.105 [INFO][5096] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.90.200/26] block=192.168.90.192/26 handle="k8s-pod-network.062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:12.105 [INFO][5096] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.200/26] handle="k8s-pod-network.062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" host="ci-4372.0.1-a-ca7a3a169f" Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:12.105 [INFO][5096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:17:12.166054 containerd[1734]: 2025-07-07 06:17:12.105 [INFO][5096] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.200/26] IPv6=[] ContainerID="062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" HandleID="k8s-pod-network.062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" Workload="ci--4372.0.1--a--ca7a3a169f-k8s-calico--kube--controllers--565cbf9989--z97xc-eth0" Jul 7 06:17:12.166950 containerd[1734]: 2025-07-07 06:17:12.108 [INFO][5074] cni-plugin/k8s.go 418: Populated endpoint ContainerID="062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" Namespace="calico-system" Pod="calico-kube-controllers-565cbf9989-z97xc" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--kube--controllers--565cbf9989--z97xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-calico--kube--controllers--565cbf9989--z97xc-eth0", GenerateName:"calico-kube-controllers-565cbf9989-", Namespace:"calico-system", SelfLink:"", UID:"77570737-2af5-46d1-a661-cdebb1903a96", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 45, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"565cbf9989", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"", Pod:"calico-kube-controllers-565cbf9989-z97xc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.90.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fac1c78199", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:12.166950 containerd[1734]: 2025-07-07 06:17:12.110 [INFO][5074] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.200/32] ContainerID="062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" Namespace="calico-system" Pod="calico-kube-controllers-565cbf9989-z97xc" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--kube--controllers--565cbf9989--z97xc-eth0" Jul 7 06:17:12.166950 containerd[1734]: 2025-07-07 06:17:12.110 [INFO][5074] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fac1c78199 ContainerID="062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" Namespace="calico-system" Pod="calico-kube-controllers-565cbf9989-z97xc" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--kube--controllers--565cbf9989--z97xc-eth0" Jul 7 06:17:12.166950 containerd[1734]: 2025-07-07 06:17:12.118 [INFO][5074] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" Namespace="calico-system" Pod="calico-kube-controllers-565cbf9989-z97xc" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--kube--controllers--565cbf9989--z97xc-eth0" Jul 7 06:17:12.166950 containerd[1734]: 2025-07-07 06:17:12.121 [INFO][5074] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" Namespace="calico-system" Pod="calico-kube-controllers-565cbf9989-z97xc" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--kube--controllers--565cbf9989--z97xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--ca7a3a169f-k8s-calico--kube--controllers--565cbf9989--z97xc-eth0", GenerateName:"calico-kube-controllers-565cbf9989-", Namespace:"calico-system", SelfLink:"", UID:"77570737-2af5-46d1-a661-cdebb1903a96", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"565cbf9989", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-ca7a3a169f", ContainerID:"062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275", Pod:"calico-kube-controllers-565cbf9989-z97xc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.90.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fac1c78199", MAC:"a6:75:a3:ec:f0:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:17:12.166950 containerd[1734]: 2025-07-07 06:17:12.162 [INFO][5074] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" Namespace="calico-system" Pod="calico-kube-controllers-565cbf9989-z97xc" WorkloadEndpoint="ci--4372.0.1--a--ca7a3a169f-k8s-calico--kube--controllers--565cbf9989--z97xc-eth0" Jul 7 06:17:12.174965 systemd-networkd[1361]: cali863ce6715dc: Gained IPv6LL Jul 7 06:17:12.198191 containerd[1734]: time="2025-07-07T06:17:12.197257296Z" level=info msg="connecting to shim 66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53" address="unix:///run/containerd/s/d1ea3f5b54b76b7ca9fb95ec157aa9c057de17e37258fac93b0afde8430dfa20" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:17:12.228819 containerd[1734]: time="2025-07-07T06:17:12.228779102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:12.243009 containerd[1734]: time="2025-07-07T06:17:12.242983777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 06:17:12.244956 systemd[1]: Started cri-containerd-66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53.scope - libcontainer container 66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53. 
Jul 7 06:17:12.249105 containerd[1734]: time="2025-07-07T06:17:12.249078229Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:12.263836 containerd[1734]: time="2025-07-07T06:17:12.262781120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:12.267349 containerd[1734]: time="2025-07-07T06:17:12.267306397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.389796925s" Jul 7 06:17:12.267538 containerd[1734]: time="2025-07-07T06:17:12.267348180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 06:17:12.272295 containerd[1734]: time="2025-07-07T06:17:12.272263354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 06:17:12.272946 containerd[1734]: time="2025-07-07T06:17:12.272479911Z" level=info msg="connecting to shim 062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275" address="unix:///run/containerd/s/6b0a3352b1ee4aaeccca4e318429cb29a780e47d7bd1058626c00d5ba2e5e246" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:17:12.282121 containerd[1734]: time="2025-07-07T06:17:12.281583589Z" level=info msg="CreateContainer within sandbox \"97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 06:17:12.333548 containerd[1734]: 
time="2025-07-07T06:17:12.333510831Z" level=info msg="Container 56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:12.333937 systemd[1]: Started cri-containerd-062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275.scope - libcontainer container 062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275. Jul 7 06:17:12.366976 systemd-networkd[1361]: calif553a3e0d03: Gained IPv6LL Jul 7 06:17:12.369282 containerd[1734]: time="2025-07-07T06:17:12.369256387Z" level=info msg="CreateContainer within sandbox \"97fbb0393ac86fa4815632bb8f85471313320b98b278d39ebf410eda4b99c6e0\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc\"" Jul 7 06:17:12.371629 containerd[1734]: time="2025-07-07T06:17:12.371407486Z" level=info msg="StartContainer for \"56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc\"" Jul 7 06:17:12.373248 containerd[1734]: time="2025-07-07T06:17:12.373209562Z" level=info msg="connecting to shim 56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc" address="unix:///run/containerd/s/5bfc36fc5a47f37b18e14b96cec3686801ec53469f011973ceddf962f45d7dc4" protocol=ttrpc version=3 Jul 7 06:17:12.395438 containerd[1734]: time="2025-07-07T06:17:12.395414200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nw88c,Uid:3b1aa788-9a19-4746-9123-ae8c0cbae91b,Namespace:kube-system,Attempt:0,} returns sandbox id \"66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53\"" Jul 7 06:17:12.409616 containerd[1734]: time="2025-07-07T06:17:12.409596729Z" level=info msg="CreateContainer within sandbox \"66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:17:12.412121 systemd[1]: Started cri-containerd-56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc.scope - 
libcontainer container 56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc. Jul 7 06:17:12.428743 containerd[1734]: time="2025-07-07T06:17:12.428263372Z" level=info msg="Container c08f260bb9ba1f30e26e71c01f622873bd497e5b5b4f77e9663878d9d6bbec91: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:12.443717 containerd[1734]: time="2025-07-07T06:17:12.443696608Z" level=info msg="CreateContainer within sandbox \"66b9100190cfc2337d42fe1eb9c6374f1e99b1581a3c3567a4afa9ed751bcb53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c08f260bb9ba1f30e26e71c01f622873bd497e5b5b4f77e9663878d9d6bbec91\"" Jul 7 06:17:12.444420 containerd[1734]: time="2025-07-07T06:17:12.444400837Z" level=info msg="StartContainer for \"c08f260bb9ba1f30e26e71c01f622873bd497e5b5b4f77e9663878d9d6bbec91\"" Jul 7 06:17:12.446456 containerd[1734]: time="2025-07-07T06:17:12.446432348Z" level=info msg="connecting to shim c08f260bb9ba1f30e26e71c01f622873bd497e5b5b4f77e9663878d9d6bbec91" address="unix:///run/containerd/s/d1ea3f5b54b76b7ca9fb95ec157aa9c057de17e37258fac93b0afde8430dfa20" protocol=ttrpc version=3 Jul 7 06:17:12.472992 systemd[1]: Started cri-containerd-c08f260bb9ba1f30e26e71c01f622873bd497e5b5b4f77e9663878d9d6bbec91.scope - libcontainer container c08f260bb9ba1f30e26e71c01f622873bd497e5b5b4f77e9663878d9d6bbec91. 
Jul 7 06:17:12.526131 containerd[1734]: time="2025-07-07T06:17:12.526111310Z" level=info msg="StartContainer for \"c08f260bb9ba1f30e26e71c01f622873bd497e5b5b4f77e9663878d9d6bbec91\" returns successfully" Jul 7 06:17:12.630134 containerd[1734]: time="2025-07-07T06:17:12.630058791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565cbf9989-z97xc,Uid:77570737-2af5-46d1-a661-cdebb1903a96,Namespace:calico-system,Attempt:0,} returns sandbox id \"062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275\"" Jul 7 06:17:12.654603 containerd[1734]: time="2025-07-07T06:17:12.654582718Z" level=info msg="StartContainer for \"56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc\" returns successfully" Jul 7 06:17:12.845877 kubelet[3113]: I0707 06:17:12.845272 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-xjf89" podStartSLOduration=24.451663992 podStartE2EDuration="27.845251639s" podCreationTimestamp="2025-07-07 06:16:45 +0000 UTC" firstStartedPulling="2025-07-07 06:17:08.877235003 +0000 UTC m=+42.333680716" lastFinishedPulling="2025-07-07 06:17:12.270822645 +0000 UTC m=+45.727268363" observedRunningTime="2025-07-07 06:17:12.843480118 +0000 UTC m=+46.299925836" watchObservedRunningTime="2025-07-07 06:17:12.845251639 +0000 UTC m=+46.301697358" Jul 7 06:17:12.955309 containerd[1734]: time="2025-07-07T06:17:12.954608146Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc\" id:\"1989534a969b9357f877293b7570c8b317a31b3c60626e2be444925093c1eb56\" pid:5307 exit_status:1 exited_at:{seconds:1751869032 nanos:954102407}" Jul 7 06:17:13.008345 systemd-networkd[1361]: cali4d7329b6202: Gained IPv6LL Jul 7 06:17:13.263025 systemd-networkd[1361]: cali5920b809ba5: Gained IPv6LL Jul 7 06:17:13.712084 systemd-networkd[1361]: cali3fac1c78199: Gained IPv6LL Jul 7 06:17:14.012285 containerd[1734]: 
time="2025-07-07T06:17:14.012047330Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc\" id:\"245eb4178535fcd2a6a1e6fb2722308378b49259337b42ab64020ff88f3656ea\" pid:5346 exit_status:1 exited_at:{seconds:1751869034 nanos:11637641}" Jul 7 06:17:14.017249 containerd[1734]: time="2025-07-07T06:17:14.017164315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:14.021311 containerd[1734]: time="2025-07-07T06:17:14.021279932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 06:17:14.026943 containerd[1734]: time="2025-07-07T06:17:14.026878185Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:14.032671 containerd[1734]: time="2025-07-07T06:17:14.032274539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:14.032843 containerd[1734]: time="2025-07-07T06:17:14.032658332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.759798461s" Jul 7 06:17:14.032912 containerd[1734]: time="2025-07-07T06:17:14.032902434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 06:17:14.035149 containerd[1734]: 
time="2025-07-07T06:17:14.035119784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:17:14.041825 containerd[1734]: time="2025-07-07T06:17:14.041282546Z" level=info msg="CreateContainer within sandbox \"76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 06:17:14.080089 containerd[1734]: time="2025-07-07T06:17:14.080061867Z" level=info msg="Container 859a3a06c30ed4903c73be8840980ffe4f7a0d1edc71b2b378043135c357ed47: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:14.080440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount564411282.mount: Deactivated successfully. Jul 7 06:17:14.110647 containerd[1734]: time="2025-07-07T06:17:14.110616281Z" level=info msg="CreateContainer within sandbox \"76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"859a3a06c30ed4903c73be8840980ffe4f7a0d1edc71b2b378043135c357ed47\"" Jul 7 06:17:14.111905 containerd[1734]: time="2025-07-07T06:17:14.111213048Z" level=info msg="StartContainer for \"859a3a06c30ed4903c73be8840980ffe4f7a0d1edc71b2b378043135c357ed47\"" Jul 7 06:17:14.112694 containerd[1734]: time="2025-07-07T06:17:14.112663044Z" level=info msg="connecting to shim 859a3a06c30ed4903c73be8840980ffe4f7a0d1edc71b2b378043135c357ed47" address="unix:///run/containerd/s/f170e447c98e0533fc8c2f2289f960216d24088ee0343d5c6b0889b9fe3ee8dd" protocol=ttrpc version=3 Jul 7 06:17:14.135972 systemd[1]: Started cri-containerd-859a3a06c30ed4903c73be8840980ffe4f7a0d1edc71b2b378043135c357ed47.scope - libcontainer container 859a3a06c30ed4903c73be8840980ffe4f7a0d1edc71b2b378043135c357ed47. 
Jul 7 06:17:14.192535 containerd[1734]: time="2025-07-07T06:17:14.192512374Z" level=info msg="StartContainer for \"859a3a06c30ed4903c73be8840980ffe4f7a0d1edc71b2b378043135c357ed47\" returns successfully" Jul 7 06:17:15.002364 containerd[1734]: time="2025-07-07T06:17:15.002308618Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc\" id:\"f0d144be0130e99c4fe2f7fd071ab60b60282f3427333b4ca824999bd92ffd1f\" pid:5397 exit_status:1 exited_at:{seconds:1751869035 nanos:1679459}" Jul 7 06:17:18.058439 containerd[1734]: time="2025-07-07T06:17:18.058378976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:18.200732 containerd[1734]: time="2025-07-07T06:17:18.200657501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 06:17:18.457212 containerd[1734]: time="2025-07-07T06:17:18.457122872Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:18.541726 containerd[1734]: time="2025-07-07T06:17:18.541668565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:18.542457 containerd[1734]: time="2025-07-07T06:17:18.542395625Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.507233355s" Jul 7 06:17:18.542457 
containerd[1734]: time="2025-07-07T06:17:18.542433997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:17:18.544310 containerd[1734]: time="2025-07-07T06:17:18.544268780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:17:18.551450 containerd[1734]: time="2025-07-07T06:17:18.551418110Z" level=info msg="CreateContainer within sandbox \"5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:17:18.577679 containerd[1734]: time="2025-07-07T06:17:18.577602281Z" level=info msg="Container 6107f1786355ef737390e0bfb13c18f31f54536cc99f8fa95802d1cc194e2f0f: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:18.600952 containerd[1734]: time="2025-07-07T06:17:18.600924284Z" level=info msg="CreateContainer within sandbox \"5472c911bb5f91665f067a2a4307d2b3f63061507c8a18351c9422b293fe2509\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6107f1786355ef737390e0bfb13c18f31f54536cc99f8fa95802d1cc194e2f0f\"" Jul 7 06:17:18.601475 containerd[1734]: time="2025-07-07T06:17:18.601393693Z" level=info msg="StartContainer for \"6107f1786355ef737390e0bfb13c18f31f54536cc99f8fa95802d1cc194e2f0f\"" Jul 7 06:17:18.602771 containerd[1734]: time="2025-07-07T06:17:18.602712711Z" level=info msg="connecting to shim 6107f1786355ef737390e0bfb13c18f31f54536cc99f8fa95802d1cc194e2f0f" address="unix:///run/containerd/s/85c781060564ad53db6bb532012c1ea74192238ddbbc90dc980d04222af7e34e" protocol=ttrpc version=3 Jul 7 06:17:18.625037 systemd[1]: Started cri-containerd-6107f1786355ef737390e0bfb13c18f31f54536cc99f8fa95802d1cc194e2f0f.scope - libcontainer container 6107f1786355ef737390e0bfb13c18f31f54536cc99f8fa95802d1cc194e2f0f. 
Jul 7 06:17:18.668772 containerd[1734]: time="2025-07-07T06:17:18.668607096Z" level=info msg="StartContainer for \"6107f1786355ef737390e0bfb13c18f31f54536cc99f8fa95802d1cc194e2f0f\" returns successfully" Jul 7 06:17:18.881287 kubelet[3113]: I0707 06:17:18.880225 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nw88c" podStartSLOduration=46.880204257 podStartE2EDuration="46.880204257s" podCreationTimestamp="2025-07-07 06:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:17:12.87239011 +0000 UTC m=+46.328835826" watchObservedRunningTime="2025-07-07 06:17:18.880204257 +0000 UTC m=+52.336649975" Jul 7 06:17:18.926216 containerd[1734]: time="2025-07-07T06:17:18.926183838Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:18.929429 containerd[1734]: time="2025-07-07T06:17:18.929404400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 06:17:18.931684 containerd[1734]: time="2025-07-07T06:17:18.931656534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 387.352685ms" Jul 7 06:17:18.931756 containerd[1734]: time="2025-07-07T06:17:18.931692256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:17:18.933873 containerd[1734]: time="2025-07-07T06:17:18.932497010Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 06:17:18.938566 containerd[1734]: time="2025-07-07T06:17:18.938370103Z" level=info msg="CreateContainer within sandbox \"5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:17:18.967640 containerd[1734]: time="2025-07-07T06:17:18.967616093Z" level=info msg="Container df6adc45b66fa44cccb711a5644a30492b13fe11f9c4ce6be50a098217e5e3d3: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:18.993434 containerd[1734]: time="2025-07-07T06:17:18.993413754Z" level=info msg="CreateContainer within sandbox \"5374d3647b0bc25cc832f1ebcdcdc3737648d15066ff1000c112025735bd178f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"df6adc45b66fa44cccb711a5644a30492b13fe11f9c4ce6be50a098217e5e3d3\"" Jul 7 06:17:18.994830 containerd[1734]: time="2025-07-07T06:17:18.993950588Z" level=info msg="StartContainer for \"df6adc45b66fa44cccb711a5644a30492b13fe11f9c4ce6be50a098217e5e3d3\"" Jul 7 06:17:18.995100 containerd[1734]: time="2025-07-07T06:17:18.995081512Z" level=info msg="connecting to shim df6adc45b66fa44cccb711a5644a30492b13fe11f9c4ce6be50a098217e5e3d3" address="unix:///run/containerd/s/12c19af02f3f7091fc54498aa062ef5ee329f3fd706b6fc76b92379d98c5e4c8" protocol=ttrpc version=3 Jul 7 06:17:19.018398 systemd[1]: Started cri-containerd-df6adc45b66fa44cccb711a5644a30492b13fe11f9c4ce6be50a098217e5e3d3.scope - libcontainer container df6adc45b66fa44cccb711a5644a30492b13fe11f9c4ce6be50a098217e5e3d3. 
Jul 7 06:17:19.116704 containerd[1734]: time="2025-07-07T06:17:19.116679252Z" level=info msg="StartContainer for \"df6adc45b66fa44cccb711a5644a30492b13fe11f9c4ce6be50a098217e5e3d3\" returns successfully" Jul 7 06:17:19.897252 kubelet[3113]: I0707 06:17:19.897168 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6676f4b4fb-62vxk" podStartSLOduration=30.672087658 podStartE2EDuration="37.897050155s" podCreationTimestamp="2025-07-07 06:16:42 +0000 UTC" firstStartedPulling="2025-07-07 06:17:11.707365455 +0000 UTC m=+45.163811174" lastFinishedPulling="2025-07-07 06:17:18.93232795 +0000 UTC m=+52.388773671" observedRunningTime="2025-07-07 06:17:19.896328713 +0000 UTC m=+53.352774437" watchObservedRunningTime="2025-07-07 06:17:19.897050155 +0000 UTC m=+53.353495869" Jul 7 06:17:19.897792 kubelet[3113]: I0707 06:17:19.897296 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6676f4b4fb-w9c8j" podStartSLOduration=31.986794046 podStartE2EDuration="38.897289338s" podCreationTimestamp="2025-07-07 06:16:41 +0000 UTC" firstStartedPulling="2025-07-07 06:17:11.633538049 +0000 UTC m=+45.089983758" lastFinishedPulling="2025-07-07 06:17:18.544033336 +0000 UTC m=+52.000479050" observedRunningTime="2025-07-07 06:17:18.882418186 +0000 UTC m=+52.338863907" watchObservedRunningTime="2025-07-07 06:17:19.897289338 +0000 UTC m=+53.353735052" Jul 7 06:17:20.864262 kubelet[3113]: I0707 06:17:20.864052 3113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:17:22.350452 containerd[1734]: time="2025-07-07T06:17:22.350378616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:22.354177 containerd[1734]: time="2025-07-07T06:17:22.354132210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active 
requests=0, bytes read=51276688" Jul 7 06:17:22.357309 containerd[1734]: time="2025-07-07T06:17:22.357259753Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:22.364036 containerd[1734]: time="2025-07-07T06:17:22.363966849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:22.365372 containerd[1734]: time="2025-07-07T06:17:22.365289994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.431307871s" Jul 7 06:17:22.365372 containerd[1734]: time="2025-07-07T06:17:22.365333008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 06:17:22.367528 containerd[1734]: time="2025-07-07T06:17:22.367460074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 06:17:22.389657 containerd[1734]: time="2025-07-07T06:17:22.389612528Z" level=info msg="CreateContainer within sandbox \"062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 06:17:22.414979 containerd[1734]: time="2025-07-07T06:17:22.414941794Z" level=info msg="Container 2e22fcad058bacc5bcbdd1db979e514b3680b3b6aed0a5b2436c5121d652ebc0: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:22.439907 containerd[1734]: 
time="2025-07-07T06:17:22.439883014Z" level=info msg="CreateContainer within sandbox \"062f9590e08caad2c91b69fb46ce9092439db060ab7f5488568548a21a011275\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2e22fcad058bacc5bcbdd1db979e514b3680b3b6aed0a5b2436c5121d652ebc0\"" Jul 7 06:17:22.440397 containerd[1734]: time="2025-07-07T06:17:22.440314906Z" level=info msg="StartContainer for \"2e22fcad058bacc5bcbdd1db979e514b3680b3b6aed0a5b2436c5121d652ebc0\"" Jul 7 06:17:22.441651 containerd[1734]: time="2025-07-07T06:17:22.441624224Z" level=info msg="connecting to shim 2e22fcad058bacc5bcbdd1db979e514b3680b3b6aed0a5b2436c5121d652ebc0" address="unix:///run/containerd/s/6b0a3352b1ee4aaeccca4e318429cb29a780e47d7bd1058626c00d5ba2e5e246" protocol=ttrpc version=3 Jul 7 06:17:22.461990 systemd[1]: Started cri-containerd-2e22fcad058bacc5bcbdd1db979e514b3680b3b6aed0a5b2436c5121d652ebc0.scope - libcontainer container 2e22fcad058bacc5bcbdd1db979e514b3680b3b6aed0a5b2436c5121d652ebc0. 
Jul 7 06:17:22.536634 containerd[1734]: time="2025-07-07T06:17:22.536602576Z" level=info msg="StartContainer for \"2e22fcad058bacc5bcbdd1db979e514b3680b3b6aed0a5b2436c5121d652ebc0\" returns successfully" Jul 7 06:17:22.941902 containerd[1734]: time="2025-07-07T06:17:22.941773381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e22fcad058bacc5bcbdd1db979e514b3680b3b6aed0a5b2436c5121d652ebc0\" id:\"26487101aaf45ab4e87ad61611b9341215bf4da701614b59e806698eba578d8e\" pid:5556 exited_at:{seconds:1751869042 nanos:941309326}" Jul 7 06:17:22.959676 kubelet[3113]: I0707 06:17:22.959444 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-565cbf9989-z97xc" podStartSLOduration=28.224592667 podStartE2EDuration="37.959422967s" podCreationTimestamp="2025-07-07 06:16:45 +0000 UTC" firstStartedPulling="2025-07-07 06:17:12.632120034 +0000 UTC m=+46.088565747" lastFinishedPulling="2025-07-07 06:17:22.366950324 +0000 UTC m=+55.823396047" observedRunningTime="2025-07-07 06:17:22.902556962 +0000 UTC m=+56.359002676" watchObservedRunningTime="2025-07-07 06:17:22.959422967 +0000 UTC m=+56.415868682" Jul 7 06:17:23.875131 kubelet[3113]: I0707 06:17:23.875083 3113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:17:24.232991 containerd[1734]: time="2025-07-07T06:17:24.232158770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:24.241425 containerd[1734]: time="2025-07-07T06:17:24.241380840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 06:17:24.253426 containerd[1734]: time="2025-07-07T06:17:24.253265621Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 7 06:17:24.258766 containerd[1734]: time="2025-07-07T06:17:24.258335277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:17:24.259068 containerd[1734]: time="2025-07-07T06:17:24.258751657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.891256591s" Jul 7 06:17:24.259152 containerd[1734]: time="2025-07-07T06:17:24.259140373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 06:17:24.269852 containerd[1734]: time="2025-07-07T06:17:24.269769653Z" level=info msg="CreateContainer within sandbox \"76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 06:17:24.306943 containerd[1734]: time="2025-07-07T06:17:24.306910654Z" level=info msg="Container c4875d1b8e0d1d5ebb9b061c97b775d1dd1fbb317e4aa5e6ec45ba5c0bfc5545: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:24.337261 containerd[1734]: time="2025-07-07T06:17:24.337208283Z" level=info msg="CreateContainer within sandbox \"76eab7a5431cd3e3b53f4f58b0b67980dc60a836ea3971fd8a85bfa75d3e2c8c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c4875d1b8e0d1d5ebb9b061c97b775d1dd1fbb317e4aa5e6ec45ba5c0bfc5545\"" Jul 7 06:17:24.339832 containerd[1734]: time="2025-07-07T06:17:24.338933304Z" level=info 
msg="StartContainer for \"c4875d1b8e0d1d5ebb9b061c97b775d1dd1fbb317e4aa5e6ec45ba5c0bfc5545\"" Jul 7 06:17:24.340670 containerd[1734]: time="2025-07-07T06:17:24.340645564Z" level=info msg="connecting to shim c4875d1b8e0d1d5ebb9b061c97b775d1dd1fbb317e4aa5e6ec45ba5c0bfc5545" address="unix:///run/containerd/s/f170e447c98e0533fc8c2f2289f960216d24088ee0343d5c6b0889b9fe3ee8dd" protocol=ttrpc version=3 Jul 7 06:17:24.376938 systemd[1]: Started cri-containerd-c4875d1b8e0d1d5ebb9b061c97b775d1dd1fbb317e4aa5e6ec45ba5c0bfc5545.scope - libcontainer container c4875d1b8e0d1d5ebb9b061c97b775d1dd1fbb317e4aa5e6ec45ba5c0bfc5545. Jul 7 06:17:24.559876 containerd[1734]: time="2025-07-07T06:17:24.559754659Z" level=info msg="StartContainer for \"c4875d1b8e0d1d5ebb9b061c97b775d1dd1fbb317e4aa5e6ec45ba5c0bfc5545\" returns successfully" Jul 7 06:17:24.746778 kubelet[3113]: I0707 06:17:24.746748 3113 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 06:17:24.747406 kubelet[3113]: I0707 06:17:24.746815 3113 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 06:17:24.914145 kubelet[3113]: I0707 06:17:24.914079 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pv69l" podStartSLOduration=25.59338958 podStartE2EDuration="39.914055151s" podCreationTimestamp="2025-07-07 06:16:45 +0000 UTC" firstStartedPulling="2025-07-07 06:17:09.940035007 +0000 UTC m=+43.396480717" lastFinishedPulling="2025-07-07 06:17:24.260700569 +0000 UTC m=+57.717146288" observedRunningTime="2025-07-07 06:17:24.91362076 +0000 UTC m=+58.370066477" watchObservedRunningTime="2025-07-07 06:17:24.914055151 +0000 UTC m=+58.370500869" Jul 7 06:17:37.302634 containerd[1734]: time="2025-07-07T06:17:37.302574817Z" level=info msg="TaskExit event in podsandbox 
handler container_id:\"734a0c07f1d95e65f885909861cb3671b22c34c1b100a394cdae81d8bb0d079c\" id:\"bf00f2c11bf3669feabd7b4fdfd37ba5e40d275be66e0cb7e3e3f2606ded17fe\" pid:5633 exited_at:{seconds:1751869057 nanos:302280321}" Jul 7 06:17:44.524686 containerd[1734]: time="2025-07-07T06:17:44.524622076Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc\" id:\"fad64e1101eb5041512e902af693ecd78a9a30ec270eb24da5fe15ff9cfc123b\" pid:5661 exited_at:{seconds:1751869064 nanos:522941012}" Jul 7 06:17:44.909226 containerd[1734]: time="2025-07-07T06:17:44.909162717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc\" id:\"14eb43f73458d2cf5bb227838743bd0a3b1a4de3e01925811b20a4c54ebde102\" pid:5683 exited_at:{seconds:1751869064 nanos:908919476}" Jul 7 06:17:52.912151 containerd[1734]: time="2025-07-07T06:17:52.912073069Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e22fcad058bacc5bcbdd1db979e514b3680b3b6aed0a5b2436c5121d652ebc0\" id:\"eb8722a352fe8353d26aaf130f03f78ac361c6225f8addc46d02c34e58de5b56\" pid:5711 exited_at:{seconds:1751869072 nanos:911702981}" Jul 7 06:18:07.298663 containerd[1734]: time="2025-07-07T06:18:07.298596476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"734a0c07f1d95e65f885909861cb3671b22c34c1b100a394cdae81d8bb0d079c\" id:\"e88211b75033df9dc3f1b0702eb7b4e51dee12d3f1ab470e8c36f520273889d7\" pid:5734 exited_at:{seconds:1751869087 nanos:298206677}" Jul 7 06:18:14.913470 containerd[1734]: time="2025-07-07T06:18:14.913286100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc\" id:\"7ac35cbc7a53160de2da83641b11385a8cff20b23fafaccbc52cdbed90ec6da8\" pid:5758 exited_at:{seconds:1751869094 nanos:912885096}" Jul 7 06:18:20.308580 systemd[1]: Started 
sshd@7-10.200.4.8:22-10.200.16.10:51672.service - OpenSSH per-connection server daemon (10.200.16.10:51672). Jul 7 06:18:20.903644 sshd[5772]: Accepted publickey for core from 10.200.16.10 port 51672 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:18:20.904997 sshd-session[5772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:18:20.910205 systemd-logind[1706]: New session 10 of user core. Jul 7 06:18:20.913981 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 06:18:21.400663 sshd[5774]: Connection closed by 10.200.16.10 port 51672 Jul 7 06:18:21.401263 sshd-session[5772]: pam_unix(sshd:session): session closed for user core Jul 7 06:18:21.403769 systemd[1]: sshd@7-10.200.4.8:22-10.200.16.10:51672.service: Deactivated successfully. Jul 7 06:18:21.406985 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 06:18:21.409404 systemd-logind[1706]: Session 10 logged out. Waiting for processes to exit. Jul 7 06:18:21.411034 systemd-logind[1706]: Removed session 10. Jul 7 06:18:22.913222 containerd[1734]: time="2025-07-07T06:18:22.913166748Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e22fcad058bacc5bcbdd1db979e514b3680b3b6aed0a5b2436c5121d652ebc0\" id:\"acafe2f5033cd44e59445ad0bb2228b33f26b223af45bae0eb6e1cae20b61362\" pid:5799 exited_at:{seconds:1751869102 nanos:912886138}" Jul 7 06:18:26.514098 systemd[1]: Started sshd@8-10.200.4.8:22-10.200.16.10:51684.service - OpenSSH per-connection server daemon (10.200.16.10:51684). Jul 7 06:18:27.138646 sshd[5815]: Accepted publickey for core from 10.200.16.10 port 51684 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:18:27.139248 sshd-session[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:18:27.147294 systemd-logind[1706]: New session 11 of user core. Jul 7 06:18:27.156568 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 7 06:18:27.623820 sshd[5819]: Connection closed by 10.200.16.10 port 51684 Jul 7 06:18:27.624478 sshd-session[5815]: pam_unix(sshd:session): session closed for user core Jul 7 06:18:27.629993 systemd[1]: sshd@8-10.200.4.8:22-10.200.16.10:51684.service: Deactivated successfully. Jul 7 06:18:27.632171 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 06:18:27.633047 systemd-logind[1706]: Session 11 logged out. Waiting for processes to exit. Jul 7 06:18:27.634596 systemd-logind[1706]: Removed session 11. Jul 7 06:18:31.811207 containerd[1734]: time="2025-07-07T06:18:31.811144507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e22fcad058bacc5bcbdd1db979e514b3680b3b6aed0a5b2436c5121d652ebc0\" id:\"220833cfbe7940ba4cece6443f49dacd135189c58c6ab029f237524901cf9c71\" pid:5844 exited_at:{seconds:1751869111 nanos:810831452}" Jul 7 06:18:32.732066 systemd[1]: Started sshd@9-10.200.4.8:22-10.200.16.10:48012.service - OpenSSH per-connection server daemon (10.200.16.10:48012). Jul 7 06:18:33.327474 sshd[5857]: Accepted publickey for core from 10.200.16.10 port 48012 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:18:33.328850 sshd-session[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:18:33.333954 systemd-logind[1706]: New session 12 of user core. Jul 7 06:18:33.338983 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 06:18:33.795812 sshd[5861]: Connection closed by 10.200.16.10 port 48012 Jul 7 06:18:33.796361 sshd-session[5857]: pam_unix(sshd:session): session closed for user core Jul 7 06:18:33.799730 systemd[1]: sshd@9-10.200.4.8:22-10.200.16.10:48012.service: Deactivated successfully. Jul 7 06:18:33.801745 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 06:18:33.802657 systemd-logind[1706]: Session 12 logged out. Waiting for processes to exit. Jul 7 06:18:33.804410 systemd-logind[1706]: Removed session 12. 
Jul 7 06:18:37.301415 containerd[1734]: time="2025-07-07T06:18:37.301342073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"734a0c07f1d95e65f885909861cb3671b22c34c1b100a394cdae81d8bb0d079c\" id:\"a36a0a7dd425c3afcaa5da34d5e8f27f8068231b3cb1cf5c69fecbd8472c43a2\" pid:5886 exited_at:{seconds:1751869117 nanos:301063983}" Jul 7 06:18:38.904366 systemd[1]: Started sshd@10-10.200.4.8:22-10.200.16.10:48014.service - OpenSSH per-connection server daemon (10.200.16.10:48014). Jul 7 06:18:39.492998 sshd[5899]: Accepted publickey for core from 10.200.16.10 port 48014 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:18:39.494475 sshd-session[5899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:18:39.499896 systemd-logind[1706]: New session 13 of user core. Jul 7 06:18:39.506980 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 06:18:39.967728 sshd[5915]: Connection closed by 10.200.16.10 port 48014 Jul 7 06:18:39.968270 sshd-session[5899]: pam_unix(sshd:session): session closed for user core Jul 7 06:18:39.971320 systemd[1]: sshd@10-10.200.4.8:22-10.200.16.10:48014.service: Deactivated successfully. Jul 7 06:18:39.973557 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 06:18:39.976580 systemd-logind[1706]: Session 13 logged out. Waiting for processes to exit. Jul 7 06:18:39.977648 systemd-logind[1706]: Removed session 13. Jul 7 06:18:40.070750 systemd[1]: Started sshd@11-10.200.4.8:22-10.200.16.10:56318.service - OpenSSH per-connection server daemon (10.200.16.10:56318). Jul 7 06:18:40.658287 sshd[5928]: Accepted publickey for core from 10.200.16.10 port 56318 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:18:40.659526 sshd-session[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:18:40.664539 systemd-logind[1706]: New session 14 of user core. 
Jul 7 06:18:40.669012 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 06:18:41.159471 sshd[5937]: Connection closed by 10.200.16.10 port 56318 Jul 7 06:18:41.160113 sshd-session[5928]: pam_unix(sshd:session): session closed for user core Jul 7 06:18:41.164142 systemd[1]: sshd@11-10.200.4.8:22-10.200.16.10:56318.service: Deactivated successfully. Jul 7 06:18:41.166371 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 06:18:41.167614 systemd-logind[1706]: Session 14 logged out. Waiting for processes to exit. Jul 7 06:18:41.168897 systemd-logind[1706]: Removed session 14. Jul 7 06:18:41.267038 systemd[1]: Started sshd@12-10.200.4.8:22-10.200.16.10:56322.service - OpenSSH per-connection server daemon (10.200.16.10:56322). Jul 7 06:18:41.867579 sshd[5947]: Accepted publickey for core from 10.200.16.10 port 56322 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:18:41.868905 sshd-session[5947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:18:41.873884 systemd-logind[1706]: New session 15 of user core. Jul 7 06:18:41.879961 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 06:18:42.361108 sshd[5949]: Connection closed by 10.200.16.10 port 56322 Jul 7 06:18:42.361638 sshd-session[5947]: pam_unix(sshd:session): session closed for user core Jul 7 06:18:42.365221 systemd[1]: sshd@12-10.200.4.8:22-10.200.16.10:56322.service: Deactivated successfully. Jul 7 06:18:42.367473 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 06:18:42.368284 systemd-logind[1706]: Session 15 logged out. Waiting for processes to exit. Jul 7 06:18:42.369687 systemd-logind[1706]: Removed session 15. 
Jul 7 06:18:44.493486 containerd[1734]: time="2025-07-07T06:18:44.493394413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc\" id:\"46e3be1f8f4d486cb5546fce4d399374a015b1e648790069e9d8b0a5c9ff41f7\" pid:5973 exited_at:{seconds:1751869124 nanos:493010528}" Jul 7 06:18:44.905399 containerd[1734]: time="2025-07-07T06:18:44.905341174Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc\" id:\"c3464586958e13a6f218b097035d66838c0b29d5be8ab185710bf5f05c5c6188\" pid:5995 exited_at:{seconds:1751869124 nanos:904932754}" Jul 7 06:18:47.470902 systemd[1]: Started sshd@13-10.200.4.8:22-10.200.16.10:56330.service - OpenSSH per-connection server daemon (10.200.16.10:56330). Jul 7 06:18:48.062961 sshd[6010]: Accepted publickey for core from 10.200.16.10 port 56330 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:18:48.065919 sshd-session[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:18:48.071216 systemd-logind[1706]: New session 16 of user core. Jul 7 06:18:48.075981 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 06:18:48.535614 sshd[6012]: Connection closed by 10.200.16.10 port 56330 Jul 7 06:18:48.536127 sshd-session[6010]: pam_unix(sshd:session): session closed for user core Jul 7 06:18:48.538832 systemd[1]: sshd@13-10.200.4.8:22-10.200.16.10:56330.service: Deactivated successfully. Jul 7 06:18:48.541085 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 06:18:48.542929 systemd-logind[1706]: Session 16 logged out. Waiting for processes to exit. Jul 7 06:18:48.543724 systemd-logind[1706]: Removed session 16. 
Jul 7 06:18:52.913478 containerd[1734]: time="2025-07-07T06:18:52.913419739Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e22fcad058bacc5bcbdd1db979e514b3680b3b6aed0a5b2436c5121d652ebc0\" id:\"cb23312e2dc56f3cf0aeba07fa2b58513fda7ea0797db7f6422030fb4932b1d3\" pid:6035 exited_at:{seconds:1751869132 nanos:913025484}" Jul 7 06:18:53.650735 systemd[1]: Started sshd@14-10.200.4.8:22-10.200.16.10:34154.service - OpenSSH per-connection server daemon (10.200.16.10:34154). Jul 7 06:18:54.237346 sshd[6045]: Accepted publickey for core from 10.200.16.10 port 34154 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:18:54.238546 sshd-session[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:18:54.242888 systemd-logind[1706]: New session 17 of user core. Jul 7 06:18:54.248967 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 06:18:54.710941 sshd[6047]: Connection closed by 10.200.16.10 port 34154 Jul 7 06:18:54.711487 sshd-session[6045]: pam_unix(sshd:session): session closed for user core Jul 7 06:18:54.715119 systemd[1]: sshd@14-10.200.4.8:22-10.200.16.10:34154.service: Deactivated successfully. Jul 7 06:18:54.717451 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 06:18:54.718463 systemd-logind[1706]: Session 17 logged out. Waiting for processes to exit. Jul 7 06:18:54.719732 systemd-logind[1706]: Removed session 17. Jul 7 06:18:59.818683 systemd[1]: Started sshd@15-10.200.4.8:22-10.200.16.10:35100.service - OpenSSH per-connection server daemon (10.200.16.10:35100). Jul 7 06:19:00.412010 sshd[6059]: Accepted publickey for core from 10.200.16.10 port 35100 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:19:00.413367 sshd-session[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:19:00.418491 systemd-logind[1706]: New session 18 of user core. 
Jul 7 06:19:00.423998 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 06:19:00.881934 sshd[6061]: Connection closed by 10.200.16.10 port 35100 Jul 7 06:19:00.882456 sshd-session[6059]: pam_unix(sshd:session): session closed for user core Jul 7 06:19:00.886095 systemd[1]: sshd@15-10.200.4.8:22-10.200.16.10:35100.service: Deactivated successfully. Jul 7 06:19:00.888303 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 06:19:00.889451 systemd-logind[1706]: Session 18 logged out. Waiting for processes to exit. Jul 7 06:19:00.890669 systemd-logind[1706]: Removed session 18. Jul 7 06:19:00.991911 systemd[1]: Started sshd@16-10.200.4.8:22-10.200.16.10:35104.service - OpenSSH per-connection server daemon (10.200.16.10:35104). Jul 7 06:19:01.581437 sshd[6073]: Accepted publickey for core from 10.200.16.10 port 35104 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:19:01.582624 sshd-session[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:19:01.587522 systemd-logind[1706]: New session 19 of user core. Jul 7 06:19:01.591980 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 06:19:02.086999 sshd[6075]: Connection closed by 10.200.16.10 port 35104 Jul 7 06:19:02.087477 sshd-session[6073]: pam_unix(sshd:session): session closed for user core Jul 7 06:19:02.090336 systemd[1]: sshd@16-10.200.4.8:22-10.200.16.10:35104.service: Deactivated successfully. Jul 7 06:19:02.092735 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 06:19:02.094203 systemd-logind[1706]: Session 19 logged out. Waiting for processes to exit. Jul 7 06:19:02.096122 systemd-logind[1706]: Removed session 19. Jul 7 06:19:02.201104 systemd[1]: Started sshd@17-10.200.4.8:22-10.200.16.10:35116.service - OpenSSH per-connection server daemon (10.200.16.10:35116). 
Jul 7 06:19:02.786555 sshd[6085]: Accepted publickey for core from 10.200.16.10 port 35116 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:19:02.787602 sshd-session[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:19:02.791846 systemd-logind[1706]: New session 20 of user core. Jul 7 06:19:02.794945 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 06:19:03.884437 update_engine[1709]: I20250707 06:19:03.884371 1709 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 7 06:19:03.884437 update_engine[1709]: I20250707 06:19:03.884428 1709 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 7 06:19:03.885037 update_engine[1709]: I20250707 06:19:03.884608 1709 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 7 06:19:03.885037 update_engine[1709]: I20250707 06:19:03.885006 1709 omaha_request_params.cc:62] Current group set to alpha Jul 7 06:19:03.885656 update_engine[1709]: I20250707 06:19:03.885162 1709 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 7 06:19:03.885656 update_engine[1709]: I20250707 06:19:03.885178 1709 update_attempter.cc:643] Scheduling an action processor start. 
Jul 7 06:19:03.885656 update_engine[1709]: I20250707 06:19:03.885200 1709 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 06:19:03.885656 update_engine[1709]: I20250707 06:19:03.885231 1709 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 7 06:19:03.885656 update_engine[1709]: I20250707 06:19:03.885288 1709 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 06:19:03.885656 update_engine[1709]: I20250707 06:19:03.885293 1709 omaha_request_action.cc:272] Request: Jul 7 06:19:03.885656 update_engine[1709]: Jul 7 06:19:03.885656 update_engine[1709]: Jul 7 06:19:03.885656 update_engine[1709]: Jul 7 06:19:03.885656 update_engine[1709]: Jul 7 06:19:03.885656 update_engine[1709]: Jul 7 06:19:03.885656 update_engine[1709]: Jul 7 06:19:03.885656 update_engine[1709]: Jul 7 06:19:03.885656 update_engine[1709]: Jul 7 06:19:03.885656 update_engine[1709]: I20250707 06:19:03.885299 1709 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 06:19:03.886007 locksmithd[1777]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 7 06:19:03.886697 update_engine[1709]: I20250707 06:19:03.886662 1709 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 06:19:03.887296 update_engine[1709]: I20250707 06:19:03.887266 1709 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 7 06:19:03.921315 update_engine[1709]: E20250707 06:19:03.921229 1709 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 06:19:03.921412 update_engine[1709]: I20250707 06:19:03.921391 1709 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 7 06:19:04.029449 sshd[6087]: Connection closed by 10.200.16.10 port 35116
Jul 7 06:19:04.029990 sshd-session[6085]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:04.033182 systemd[1]: sshd@17-10.200.4.8:22-10.200.16.10:35116.service: Deactivated successfully.
Jul 7 06:19:04.035560 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 06:19:04.037331 systemd-logind[1706]: Session 20 logged out. Waiting for processes to exit.
Jul 7 06:19:04.039393 systemd-logind[1706]: Removed session 20.
Jul 7 06:19:04.137490 systemd[1]: Started sshd@18-10.200.4.8:22-10.200.16.10:35126.service - OpenSSH per-connection server daemon (10.200.16.10:35126).
Jul 7 06:19:04.724495 sshd[6106]: Accepted publickey for core from 10.200.16.10 port 35126 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:19:04.725765 sshd-session[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:04.730690 systemd-logind[1706]: New session 21 of user core.
Jul 7 06:19:04.735978 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 06:19:05.292288 sshd[6108]: Connection closed by 10.200.16.10 port 35126
Jul 7 06:19:05.292855 sshd-session[6106]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:05.296966 systemd[1]: sshd@18-10.200.4.8:22-10.200.16.10:35126.service: Deactivated successfully.
Jul 7 06:19:05.299165 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 06:19:05.300635 systemd-logind[1706]: Session 21 logged out. Waiting for processes to exit.
Jul 7 06:19:05.301733 systemd-logind[1706]: Removed session 21.
Jul 7 06:19:05.396737 systemd[1]: Started sshd@19-10.200.4.8:22-10.200.16.10:35136.service - OpenSSH per-connection server daemon (10.200.16.10:35136).
Jul 7 06:19:05.987312 sshd[6119]: Accepted publickey for core from 10.200.16.10 port 35136 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:19:05.988594 sshd-session[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:05.993525 systemd-logind[1706]: New session 22 of user core.
Jul 7 06:19:06.000946 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 06:19:06.460376 sshd[6121]: Connection closed by 10.200.16.10 port 35136
Jul 7 06:19:06.460929 sshd-session[6119]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:06.464576 systemd[1]: sshd@19-10.200.4.8:22-10.200.16.10:35136.service: Deactivated successfully.
Jul 7 06:19:06.466558 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 06:19:06.467582 systemd-logind[1706]: Session 22 logged out. Waiting for processes to exit.
Jul 7 06:19:06.469008 systemd-logind[1706]: Removed session 22.
Jul 7 06:19:07.297150 containerd[1734]: time="2025-07-07T06:19:07.297097185Z" level=info msg="TaskExit event in podsandbox handler container_id:\"734a0c07f1d95e65f885909861cb3671b22c34c1b100a394cdae81d8bb0d079c\" id:\"fb88da56fa97d883c26bceb58619f2ab2b325001198792432adbfe8f5ca5471f\" pid:6144 exited_at:{seconds:1751869147 nanos:296567887}"
Jul 7 06:19:11.567333 systemd[1]: Started sshd@20-10.200.4.8:22-10.200.16.10:44314.service - OpenSSH per-connection server daemon (10.200.16.10:44314).
Jul 7 06:19:12.158184 sshd[6159]: Accepted publickey for core from 10.200.16.10 port 44314 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:19:12.159495 sshd-session[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:12.164286 systemd-logind[1706]: New session 23 of user core.
Jul 7 06:19:12.169947 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 06:19:12.632921 sshd[6161]: Connection closed by 10.200.16.10 port 44314
Jul 7 06:19:12.633439 sshd-session[6159]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:12.638381 systemd[1]: sshd@20-10.200.4.8:22-10.200.16.10:44314.service: Deactivated successfully.
Jul 7 06:19:12.640218 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 06:19:12.641032 systemd-logind[1706]: Session 23 logged out. Waiting for processes to exit.
Jul 7 06:19:12.642276 systemd-logind[1706]: Removed session 23.
Jul 7 06:19:13.885184 update_engine[1709]: I20250707 06:19:13.885098 1709 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 06:19:13.885737 update_engine[1709]: I20250707 06:19:13.885423 1709 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 06:19:13.885787 update_engine[1709]: I20250707 06:19:13.885761 1709 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 06:19:13.914148 update_engine[1709]: E20250707 06:19:13.914108 1709 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 06:19:13.914253 update_engine[1709]: I20250707 06:19:13.914178 1709 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 7 06:19:14.908943 containerd[1734]: time="2025-07-07T06:19:14.908887252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56c947696a15a1fa0d82bfcafd17b6338ed25d413a26d9bd9653d8316c98aecc\" id:\"45904708528dff9c12cc47ac9cdf471bc7ab5c08d09303649a253741539d0e32\" pid:6185 exited_at:{seconds:1751869154 nanos:908565871}"
Jul 7 06:19:17.749775 systemd[1]: Started sshd@21-10.200.4.8:22-10.200.16.10:44330.service - OpenSSH per-connection server daemon (10.200.16.10:44330).
Jul 7 06:19:18.338009 sshd[6196]: Accepted publickey for core from 10.200.16.10 port 44330 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:19:18.339483 sshd-session[6196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:18.344907 systemd-logind[1706]: New session 24 of user core.
Jul 7 06:19:18.350011 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 06:19:18.823363 sshd[6198]: Connection closed by 10.200.16.10 port 44330
Jul 7 06:19:18.824250 sshd-session[6196]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:18.827247 systemd[1]: sshd@21-10.200.4.8:22-10.200.16.10:44330.service: Deactivated successfully.
Jul 7 06:19:18.829497 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 06:19:18.830946 systemd-logind[1706]: Session 24 logged out. Waiting for processes to exit.
Jul 7 06:19:18.832738 systemd-logind[1706]: Removed session 24.
Jul 7 06:19:22.923267 containerd[1734]: time="2025-07-07T06:19:22.923208921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e22fcad058bacc5bcbdd1db979e514b3680b3b6aed0a5b2436c5121d652ebc0\" id:\"4cb0f3c0dadb8eb585c86ad5eada59110e1dc42c4c1ffaf0461559cd5cdd7ccf\" pid:6223 exited_at:{seconds:1751869162 nanos:922667125}"
Jul 7 06:19:23.889529 update_engine[1709]: I20250707 06:19:23.888880 1709 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 06:19:23.889529 update_engine[1709]: I20250707 06:19:23.889195 1709 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 06:19:23.889529 update_engine[1709]: I20250707 06:19:23.889477 1709 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 06:19:23.926051 update_engine[1709]: E20250707 06:19:23.926009 1709 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 06:19:23.926232 update_engine[1709]: I20250707 06:19:23.926215 1709 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 7 06:19:23.931895 systemd[1]: Started sshd@22-10.200.4.8:22-10.200.16.10:46170.service - OpenSSH per-connection server daemon (10.200.16.10:46170).
Jul 7 06:19:24.526459 sshd[6233]: Accepted publickey for core from 10.200.16.10 port 46170 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:19:24.528293 sshd-session[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:24.534175 systemd-logind[1706]: New session 25 of user core.
Jul 7 06:19:24.542005 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 06:19:25.019890 sshd[6235]: Connection closed by 10.200.16.10 port 46170
Jul 7 06:19:25.020435 sshd-session[6233]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:25.027479 systemd-logind[1706]: Session 25 logged out. Waiting for processes to exit.
Jul 7 06:19:25.028161 systemd[1]: sshd@22-10.200.4.8:22-10.200.16.10:46170.service: Deactivated successfully.
Jul 7 06:19:25.035075 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 06:19:25.041252 systemd-logind[1706]: Removed session 25.
Jul 7 06:19:30.126889 systemd[1]: Started sshd@23-10.200.4.8:22-10.200.16.10:46666.service - OpenSSH per-connection server daemon (10.200.16.10:46666).
Jul 7 06:19:30.723094 sshd[6249]: Accepted publickey for core from 10.200.16.10 port 46666 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:19:30.724398 sshd-session[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:30.728998 systemd-logind[1706]: New session 26 of user core.
Jul 7 06:19:30.731965 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 7 06:19:31.196303 sshd[6251]: Connection closed by 10.200.16.10 port 46666
Jul 7 06:19:31.196890 sshd-session[6249]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:31.199948 systemd[1]: sshd@23-10.200.4.8:22-10.200.16.10:46666.service: Deactivated successfully.
Jul 7 06:19:31.202535 systemd[1]: session-26.scope: Deactivated successfully.
Jul 7 06:19:31.204068 systemd-logind[1706]: Session 26 logged out. Waiting for processes to exit.
Jul 7 06:19:31.206138 systemd-logind[1706]: Removed session 26.
Jul 7 06:19:31.808634 containerd[1734]: time="2025-07-07T06:19:31.808584769Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e22fcad058bacc5bcbdd1db979e514b3680b3b6aed0a5b2436c5121d652ebc0\" id:\"84e892e2f559cbac71e669908b7a0cb63fa1f4344656040c81cb3feb0990359c\" pid:6275 exited_at:{seconds:1751869171 nanos:808362556}"
Jul 7 06:19:33.885611 update_engine[1709]: I20250707 06:19:33.885524 1709 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 06:19:33.886203 update_engine[1709]: I20250707 06:19:33.885886 1709 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 06:19:33.886238 update_engine[1709]: I20250707 06:19:33.886210 1709 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 06:19:34.014097 update_engine[1709]: E20250707 06:19:34.014040 1709 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 06:19:34.014266 update_engine[1709]: I20250707 06:19:34.014128 1709 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 06:19:34.014266 update_engine[1709]: I20250707 06:19:34.014139 1709 omaha_request_action.cc:617] Omaha request response:
Jul 7 06:19:34.014266 update_engine[1709]: E20250707 06:19:34.014247 1709 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 7 06:19:34.014368 update_engine[1709]: I20250707 06:19:34.014271 1709 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 7 06:19:34.014368 update_engine[1709]: I20250707 06:19:34.014277 1709 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 06:19:34.014368 update_engine[1709]: I20250707 06:19:34.014283 1709 update_attempter.cc:306] Processing Done.
Jul 7 06:19:34.014368 update_engine[1709]: E20250707 06:19:34.014305 1709 update_attempter.cc:619] Update failed.
Jul 7 06:19:34.014368 update_engine[1709]: I20250707 06:19:34.014312 1709 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 7 06:19:34.014368 update_engine[1709]: I20250707 06:19:34.014319 1709 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 7 06:19:34.014368 update_engine[1709]: I20250707 06:19:34.014326 1709 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 7 06:19:34.014552 update_engine[1709]: I20250707 06:19:34.014456 1709 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 7 06:19:34.014552 update_engine[1709]: I20250707 06:19:34.014488 1709 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 7 06:19:34.014552 update_engine[1709]: I20250707 06:19:34.014495 1709 omaha_request_action.cc:272] Request:
Jul 7 06:19:34.014552 update_engine[1709]:
Jul 7 06:19:34.014552 update_engine[1709]:
Jul 7 06:19:34.014552 update_engine[1709]:
Jul 7 06:19:34.014552 update_engine[1709]:
Jul 7 06:19:34.014552 update_engine[1709]:
Jul 7 06:19:34.014552 update_engine[1709]:
Jul 7 06:19:34.014552 update_engine[1709]: I20250707 06:19:34.014503 1709 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 06:19:34.015082 update_engine[1709]: I20250707 06:19:34.014703 1709 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 06:19:34.015082 update_engine[1709]: I20250707 06:19:34.014976 1709 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 06:19:34.015378 locksmithd[1777]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 7 06:19:34.054449 update_engine[1709]: E20250707 06:19:34.054407 1709 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 06:19:34.054571 update_engine[1709]: I20250707 06:19:34.054471 1709 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 06:19:34.054571 update_engine[1709]: I20250707 06:19:34.054477 1709 omaha_request_action.cc:617] Omaha request response:
Jul 7 06:19:34.054571 update_engine[1709]: I20250707 06:19:34.054484 1709 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 06:19:34.054571 update_engine[1709]: I20250707 06:19:34.054489 1709 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 06:19:34.054571 update_engine[1709]: I20250707 06:19:34.054495 1709 update_attempter.cc:306] Processing Done.
Jul 7 06:19:34.054571 update_engine[1709]: I20250707 06:19:34.054502 1709 update_attempter.cc:310] Error event sent.
Jul 7 06:19:34.054571 update_engine[1709]: I20250707 06:19:34.054512 1709 update_check_scheduler.cc:74] Next update check in 40m27s
Jul 7 06:19:34.054968 locksmithd[1777]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0