Jul 7 06:13:08.917973 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025
Jul 7 06:13:08.917999 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:13:08.918009 kernel: BIOS-provided physical RAM map:
Jul 7 06:13:08.918017 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 7 06:13:08.918023 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 7 06:13:08.918030 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jul 7 06:13:08.918053 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jul 7 06:13:08.918062 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jul 7 06:13:08.918069 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jul 7 06:13:08.918076 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 7 06:13:08.918082 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 7 06:13:08.918088 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 7 06:13:08.918094 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 7 06:13:08.918100 kernel: printk: legacy bootconsole [earlyser0] enabled
Jul 7 06:13:08.918109 kernel: NX (Execute Disable) protection: active
Jul 7 06:13:08.918116 kernel: APIC: Static calls initialized
Jul 7 06:13:08.918122 kernel: efi: EFI v2.7 by Microsoft
Jul 7 06:13:08.918129 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eab5518 RNG=0x3ffd2018
Jul 7 06:13:08.918135 kernel: random: crng init done
Jul 7 06:13:08.918142 kernel: secureboot: Secure boot disabled
Jul 7 06:13:08.918148 kernel: SMBIOS 3.1.0 present.
Jul 7 06:13:08.918154 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Jul 7 06:13:08.918161 kernel: DMI: Memory slots populated: 2/2
Jul 7 06:13:08.918168 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 7 06:13:08.918174 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jul 7 06:13:08.918181 kernel: Hyper-V: Nested features: 0x3e0101
Jul 7 06:13:08.918187 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 7 06:13:08.918193 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 7 06:13:08.918199 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 7 06:13:08.918206 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 7 06:13:08.918212 kernel: tsc: Detected 2300.000 MHz processor
Jul 7 06:13:08.918219 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 06:13:08.918226 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 06:13:08.918234 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jul 7 06:13:08.918241 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 7 06:13:08.918248 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 06:13:08.918255 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jul 7 06:13:08.918261 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jul 7 06:13:08.918268 kernel: Using GB pages for direct mapping
Jul 7 06:13:08.918275 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:13:08.918284 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 7 06:13:08.918292 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:13:08.918299 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:13:08.918306 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 7 06:13:08.918313 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 7 06:13:08.918320 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:13:08.918327 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:13:08.918335 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:13:08.918342 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jul 7 06:13:08.918349 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jul 7 06:13:08.918356 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:13:08.918362 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 7 06:13:08.918369 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Jul 7 06:13:08.918376 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 7 06:13:08.918383 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 7 06:13:08.918390 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 7 06:13:08.918399 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 7 06:13:08.918405 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Jul 7 06:13:08.918412 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jul 7 06:13:08.918419 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 7 06:13:08.918426 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 7 06:13:08.918432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jul 7 06:13:08.918440 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jul 7 06:13:08.918446 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jul 7 06:13:08.918453 kernel: Zone ranges:
Jul 7 06:13:08.918461 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 06:13:08.918467 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jul 7 06:13:08.918474 kernel:   Normal   [mem 0x0000000100000000-0x00000002bfffffff]
Jul 7 06:13:08.918481 kernel:   Device   empty
Jul 7 06:13:08.918487 kernel: Movable zone start for each node
Jul 7 06:13:08.918494 kernel: Early memory node ranges
Jul 7 06:13:08.918500 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 7 06:13:08.918507 kernel:   node   0: [mem 0x0000000000100000-0x00000000044fdfff]
Jul 7 06:13:08.918514 kernel:   node   0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jul 7 06:13:08.918521 kernel:   node   0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 7 06:13:08.918528 kernel:   node   0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 7 06:13:08.918534 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 7 06:13:08.918541 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 06:13:08.918548 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 7 06:13:08.918555 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jul 7 06:13:08.918561 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jul 7 06:13:08.918568 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 7 06:13:08.918575 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 06:13:08.918583 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 06:13:08.918589 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 06:13:08.918596 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 7 06:13:08.918603 kernel: TSC deadline timer available
Jul 7 06:13:08.918609 kernel: CPU topo: Max. logical packages:   1
Jul 7 06:13:08.918616 kernel: CPU topo: Max. logical dies:       1
Jul 7 06:13:08.918623 kernel: CPU topo: Max. dies per package:   1
Jul 7 06:13:08.918629 kernel: CPU topo: Max. threads per core:   2
Jul 7 06:13:08.918636 kernel: CPU topo: Num. cores per package:  1
Jul 7 06:13:08.918644 kernel: CPU topo: Num. threads per package: 2
Jul 7 06:13:08.918650 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 7 06:13:08.918657 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 7 06:13:08.918663 kernel: Booting paravirtualized kernel on Hyper-V
Jul 7 06:13:08.918670 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 06:13:08.918677 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 7 06:13:08.918684 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 7 06:13:08.918690 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 7 06:13:08.918695 kernel: pcpu-alloc: [0] 0 1
Jul 7 06:13:08.918703 kernel: Hyper-V: PV spinlocks enabled
Jul 7 06:13:08.918709 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 06:13:08.918717 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:13:08.918724 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:13:08.918730 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 7 06:13:08.918738 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 06:13:08.918744 kernel: Fallback order for Node 0: 0
Jul 7 06:13:08.918751 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jul 7 06:13:08.918759 kernel: Policy zone: Normal
Jul 7 06:13:08.918765 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:13:08.918772 kernel: software IO TLB: area num 2.
Jul 7 06:13:08.918778 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 06:13:08.918785 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 06:13:08.918792 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 06:13:08.918799 kernel: Dynamic Preempt: voluntary
Jul 7 06:13:08.918805 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:13:08.918813 kernel: rcu: 	RCU event tracing is enabled.
Jul 7 06:13:08.918827 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 06:13:08.918834 kernel: 	Trampoline variant of Tasks RCU enabled.
Jul 7 06:13:08.918842 kernel: 	Rude variant of Tasks RCU enabled.
Jul 7 06:13:08.918851 kernel: 	Tracing variant of Tasks RCU enabled.
Jul 7 06:13:08.918859 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:13:08.918866 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 06:13:08.918874 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:13:08.918882 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:13:08.918890 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:13:08.918898 kernel: Using NULL legacy PIC
Jul 7 06:13:08.918907 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 7 06:13:08.918915 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:13:08.918922 kernel: Console: colour dummy device 80x25
Jul 7 06:13:08.918930 kernel: printk: legacy console [tty1] enabled
Jul 7 06:13:08.918938 kernel: printk: legacy console [ttyS0] enabled
Jul 7 06:13:08.918945 kernel: printk: legacy bootconsole [earlyser0] disabled
Jul 7 06:13:08.918953 kernel: ACPI: Core revision 20240827
Jul 7 06:13:08.918962 kernel: Failed to register legacy timer interrupt
Jul 7 06:13:08.918970 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 06:13:08.918977 kernel: x2apic enabled
Jul 7 06:13:08.918985 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 06:13:08.918992 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 7 06:13:08.918999 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 7 06:13:08.919006 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jul 7 06:13:08.919013 kernel: Hyper-V: Using IPI hypercalls
Jul 7 06:13:08.919021 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jul 7 06:13:08.919029 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jul 7 06:13:08.919052 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jul 7 06:13:08.919060 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jul 7 06:13:08.919067 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jul 7 06:13:08.919075 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jul 7 06:13:08.919080 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jul 7 06:13:08.919085 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
Jul 7 06:13:08.919090 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 06:13:08.919095 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 7 06:13:08.919100 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 7 06:13:08.919105 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 06:13:08.919109 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 06:13:08.919114 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 06:13:08.919119 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 7 06:13:08.919123 kernel: RETBleed: Vulnerable
Jul 7 06:13:08.919128 kernel: Speculative Store Bypass: Vulnerable
Jul 7 06:13:08.919132 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 7 06:13:08.919137 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 06:13:08.919141 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 06:13:08.919147 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 06:13:08.919152 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 7 06:13:08.919156 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 7 06:13:08.919161 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 7 06:13:08.919165 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jul 7 06:13:08.919170 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jul 7 06:13:08.919174 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jul 7 06:13:08.919179 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jul 7 06:13:08.919183 kernel: x86/fpu: xstate_offset[5]:  832, xstate_sizes[5]:   64
Jul 7 06:13:08.919188 kernel: x86/fpu: xstate_offset[6]:  896, xstate_sizes[6]:  512
Jul 7 06:13:08.919192 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 7 06:13:08.919197 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]:   16
Jul 7 06:13:08.919202 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]:   64
Jul 7 06:13:08.919206 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jul 7 06:13:08.919211 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jul 7 06:13:08.919215 kernel: Freeing SMP alternatives memory: 32K
Jul 7 06:13:08.919220 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:13:08.919224 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 06:13:08.919229 kernel: landlock: Up and running.
Jul 7 06:13:08.919233 kernel: SELinux:  Initializing.
Jul 7 06:13:08.919238 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 7 06:13:08.919242 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 7 06:13:08.919247 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jul 7 06:13:08.919252 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jul 7 06:13:08.919257 kernel: signal: max sigframe size: 11952
Jul 7 06:13:08.919262 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:13:08.919267 kernel: rcu: 	Max phase no-delay instances is 400.
Jul 7 06:13:08.919271 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 06:13:08.919276 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 7 06:13:08.919281 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:13:08.919285 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 06:13:08.919290 kernel: .... node  #0, CPUs:      #1
Jul 7 06:13:08.919295 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 06:13:08.919300 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Jul 7 06:13:08.919305 kernel: Memory: 8077024K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 299988K reserved, 0K cma-reserved)
Jul 7 06:13:08.919310 kernel: devtmpfs: initialized
Jul 7 06:13:08.919314 kernel: x86/mm: Memory block size: 128MB
Jul 7 06:13:08.919319 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 7 06:13:08.919324 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:13:08.919328 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 06:13:08.919333 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:13:08.919339 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:13:08.919343 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:13:08.919348 kernel: audit: type=2000 audit(1751868786.027:1): state=initialized audit_enabled=0 res=1
Jul 7 06:13:08.919352 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:13:08.919357 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 06:13:08.919362 kernel: cpuidle: using governor menu
Jul 7 06:13:08.919366 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:13:08.919371 kernel: dca service started, version 1.12.1
Jul 7 06:13:08.919376 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jul 7 06:13:08.919381 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jul 7 06:13:08.919386 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 06:13:08.919391 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:13:08.919395 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:13:08.919400 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:13:08.919405 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:13:08.919409 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:13:08.919414 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:13:08.919420 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:13:08.919424 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:13:08.919429 kernel: ACPI: Interpreter enabled
Jul 7 06:13:08.919433 kernel: ACPI: PM: (supports S0 S5)
Jul 7 06:13:08.919438 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 06:13:08.919443 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 06:13:08.919447 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 7 06:13:08.919452 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 7 06:13:08.919456 kernel: iommu: Default domain type: Translated
Jul 7 06:13:08.919461 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 06:13:08.919466 kernel: efivars: Registered efivars operations
Jul 7 06:13:08.919471 kernel: PCI: Using ACPI for IRQ routing
Jul 7 06:13:08.919476 kernel: PCI: System does not support PCI
Jul 7 06:13:08.919480 kernel: vgaarb: loaded
Jul 7 06:13:08.919485 kernel: clocksource: Switched to clocksource tsc-early
Jul 7 06:13:08.919489 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:13:08.919494 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:13:08.919498 kernel: pnp: PnP ACPI init
Jul 7 06:13:08.919503 kernel: pnp: PnP ACPI: found 3 devices
Jul 7 06:13:08.919509 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 06:13:08.919513 kernel: NET: Registered PF_INET protocol family
Jul 7 06:13:08.919518 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 7 06:13:08.919522 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 7 06:13:08.919527 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:13:08.919532 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 06:13:08.919537 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 7 06:13:08.919541 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 7 06:13:08.919547 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 7 06:13:08.919551 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 7 06:13:08.919556 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:13:08.919561 kernel: NET: Registered PF_XDP protocol family
Jul 7 06:13:08.919565 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:13:08.919570 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 7 06:13:08.919574 kernel: software IO TLB: mapped [mem 0x000000003a9c6000-0x000000003e9c6000] (64MB)
Jul 7 06:13:08.919579 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jul 7 06:13:08.919583 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jul 7 06:13:08.919589 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jul 7 06:13:08.919594 kernel: clocksource: Switched to clocksource tsc
Jul 7 06:13:08.919598 kernel: Initialise system trusted keyrings
Jul 7 06:13:08.919603 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 7 06:13:08.919608 kernel: Key type asymmetric registered
Jul 7 06:13:08.919612 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:13:08.919617 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:13:08.919622 kernel: io scheduler mq-deadline registered
Jul 7 06:13:08.919626 kernel: io scheduler kyber registered
Jul 7 06:13:08.919632 kernel: io scheduler bfq registered
Jul 7 06:13:08.919637 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 06:13:08.919641 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:13:08.919646 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 06:13:08.919651 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 7 06:13:08.919655 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 06:13:08.919660 kernel: i8042: PNP: No PS/2 controller found.
Jul 7 06:13:08.919748 kernel: rtc_cmos 00:02: registered as rtc0
Jul 7 06:13:08.919795 kernel: rtc_cmos 00:02: setting system clock to 2025-07-07T06:13:08 UTC (1751868788)
Jul 7 06:13:08.919836 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul 7 06:13:08.919842 kernel: intel_pstate: Intel P-state driver initializing
Jul 7 06:13:08.919847 kernel: efifb: probing for efifb
Jul 7 06:13:08.919852 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 7 06:13:08.919857 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 7 06:13:08.919861 kernel: efifb: scrolling: redraw
Jul 7 06:13:08.919866 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 7 06:13:08.919871 kernel: Console: switching to colour frame buffer device 128x48
Jul 7 06:13:08.919876 kernel: fb0: EFI VGA frame buffer device
Jul 7 06:13:08.919881 kernel: pstore: Using crash dump compression: deflate
Jul 7 06:13:08.919886 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 7 06:13:08.919890 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:13:08.919895 kernel: Segment Routing with IPv6
Jul 7 06:13:08.919900 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:13:08.919904 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:13:08.919909 kernel: Key type dns_resolver registered
Jul 7 06:13:08.919913 kernel: IPI shorthand broadcast: enabled
Jul 7 06:13:08.919919 kernel: sched_clock: Marking stable (2586004810, 80981693)->(2985403762, -318417259)
Jul 7 06:13:08.919923 kernel: registered taskstats version 1
Jul 7 06:13:08.919928 kernel: Loading compiled-in X.509 certificates
Jul 7 06:13:08.919933 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e'
Jul 7 06:13:08.919938 kernel: Demotion targets for Node 0: null
Jul 7 06:13:08.919942 kernel: Key type .fscrypt registered
Jul 7 06:13:08.919947 kernel: Key type fscrypt-provisioning registered
Jul 7 06:13:08.919952 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:13:08.919956 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:13:08.919962 kernel: ima: No architecture policies found
Jul 7 06:13:08.919966 kernel: clk: Disabling unused clocks
Jul 7 06:13:08.919971 kernel: Warning: unable to open an initial console.
Jul 7 06:13:08.919976 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 06:13:08.919980 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 06:13:08.919985 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 06:13:08.919989 kernel: Run /init as init process
Jul 7 06:13:08.919994 kernel:   with arguments:
Jul 7 06:13:08.919998 kernel:     /init
Jul 7 06:13:08.920003 kernel:   with environment:
Jul 7 06:13:08.920008 kernel:     HOME=/
Jul 7 06:13:08.920012 kernel:     TERM=linux
Jul 7 06:13:08.920017 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:13:08.920023 systemd[1]: Successfully made /usr/ read-only.
Jul 7 06:13:08.920030 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:13:08.920087 systemd[1]: Detected virtualization microsoft.
Jul 7 06:13:08.920097 systemd[1]: Detected architecture x86-64.
Jul 7 06:13:08.920105 systemd[1]: Running in initrd.
Jul 7 06:13:08.920113 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:13:08.920122 systemd[1]: Hostname set to .
Jul 7 06:13:08.920130 systemd[1]: Initializing machine ID from random generator.
Jul 7 06:13:08.920137 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:13:08.920145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:13:08.920153 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:13:08.920163 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:13:08.920171 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:13:08.920178 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:13:08.920187 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:13:08.920196 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:13:08.920204 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:13:08.920211 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:13:08.920220 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:13:08.920229 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:13:08.920237 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:13:08.920246 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:13:08.920254 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:13:08.920261 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:13:08.920269 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:13:08.920277 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:13:08.920285 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 06:13:08.920295 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:13:08.920303 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:13:08.920312 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:13:08.920319 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:13:08.920327 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:13:08.920334 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:13:08.920341 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:13:08.920350 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 06:13:08.920360 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:13:08.920368 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:13:08.920376 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:13:08.920407 systemd-journald[205]: Collecting audit messages is disabled.
Jul 7 06:13:08.920430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:13:08.920440 systemd-journald[205]: Journal started
Jul 7 06:13:08.920461 systemd-journald[205]: Runtime Journal (/run/log/journal/05372cb4366742a9a4f3eb055866e415) is 8M, max 158.9M, 150.9M free.
Jul 7 06:13:08.924049 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:13:08.932372 systemd-modules-load[207]: Inserted module 'overlay'
Jul 7 06:13:08.934456 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:13:08.940413 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:13:08.944293 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:13:08.951205 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:13:08.959375 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:13:08.966100 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:13:08.970429 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:13:08.972337 kernel: Bridge firewalling registered
Jul 7 06:13:08.970430 systemd-modules-load[207]: Inserted module 'br_netfilter'
Jul 7 06:13:08.979109 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:13:08.982398 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:13:08.984817 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 06:13:08.990711 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:13:08.994372 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:13:08.998161 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:13:08.998376 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:13:09.014916 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:13:09.020240 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:13:09.022007 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:13:09.026120 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 06:13:09.036842 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:13:09.049650 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 06:13:09.076849 systemd-resolved[246]: Positive Trust Anchors: Jul 7 06:13:09.076861 systemd-resolved[246]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:13:09.076894 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:13:09.095941 systemd-resolved[246]: Defaulting to hostname 'linux'. Jul 7 06:13:09.096672 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:13:09.098720 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:13:09.116053 kernel: SCSI subsystem initialized Jul 7 06:13:09.123048 kernel: Loading iSCSI transport class v2.0-870. Jul 7 06:13:09.131049 kernel: iscsi: registered transport (tcp) Jul 7 06:13:09.146104 kernel: iscsi: registered transport (qla4xxx) Jul 7 06:13:09.146139 kernel: QLogic iSCSI HBA Driver Jul 7 06:13:09.157204 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 06:13:09.170819 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:13:09.172929 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 06:13:09.199810 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 06:13:09.202240 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jul 7 06:13:09.247048 kernel: raid6: avx512x4 gen() 46132 MB/s Jul 7 06:13:09.264044 kernel: raid6: avx512x2 gen() 46562 MB/s Jul 7 06:13:09.281042 kernel: raid6: avx512x1 gen() 30389 MB/s Jul 7 06:13:09.299042 kernel: raid6: avx2x4 gen() 43316 MB/s Jul 7 06:13:09.316042 kernel: raid6: avx2x2 gen() 44358 MB/s Jul 7 06:13:09.334254 kernel: raid6: avx2x1 gen() 32493 MB/s Jul 7 06:13:09.334267 kernel: raid6: using algorithm avx512x2 gen() 46562 MB/s Jul 7 06:13:09.353163 kernel: raid6: .... xor() 37406 MB/s, rmw enabled Jul 7 06:13:09.353180 kernel: raid6: using avx512x2 recovery algorithm Jul 7 06:13:09.370050 kernel: xor: automatically using best checksumming function avx Jul 7 06:13:09.471048 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 06:13:09.474956 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:13:09.478378 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:13:09.492683 systemd-udevd[455]: Using default interface naming scheme 'v255'. Jul 7 06:13:09.496295 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:13:09.502609 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 06:13:09.516006 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jul 7 06:13:09.531087 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:13:09.532052 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:13:09.563913 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:13:09.570892 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 06:13:09.602046 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 06:13:09.611052 kernel: AES CTR mode by8 optimization enabled Jul 7 06:13:09.633299 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 7 06:13:09.633397 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:13:09.640561 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:13:09.649796 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:13:09.659400 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:13:09.667024 kernel: hv_vmbus: Vmbus version:5.3 Jul 7 06:13:09.659487 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:13:09.673211 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:13:09.680047 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 06:13:09.684895 kernel: hv_vmbus: registering driver hid_hyperv Jul 7 06:13:09.685194 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 7 06:13:09.685264 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 7 06:13:09.685279 kernel: hv_vmbus: registering driver hv_pci Jul 7 06:13:09.691070 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jul 7 06:13:09.691128 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 7 06:13:09.695084 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 7 06:13:09.695119 kernel: hv_vmbus: registering driver hv_netvsc Jul 7 06:13:09.701077 kernel: PTP clock support registered Jul 7 06:13:09.717399 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jul 7 06:13:09.723669 kernel: hv_utils: Registering HyperV Utility Driver Jul 7 06:13:09.723690 kernel: hv_vmbus: registering driver hv_utils Jul 7 06:13:09.723701 kernel: hv_utils: Shutdown IC version 3.2 Jul 7 06:13:09.723712 kernel: hv_utils: Heartbeat IC version 3.0 Jul 7 06:13:09.723721 kernel: hv_utils: TimeSync IC version 4.0 Jul 7 06:13:09.723736 kernel: input: AT Translated Set 2 keyboard 
as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jul 7 06:13:09.710146 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:13:09.880693 systemd-resolved[246]: Clock change detected. Flushing caches. Jul 7 06:13:09.886897 kernel: hv_vmbus: registering driver hv_storvsc Jul 7 06:13:09.886988 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5278eb09 (unnamed net_device) (uninitialized): VF slot 1 added Jul 7 06:13:09.890021 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jul 7 06:13:09.892632 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jul 7 06:13:09.892785 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jul 7 06:13:09.897048 kernel: scsi host0: storvsc_host_t Jul 7 06:13:09.898505 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jul 7 06:13:09.901122 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jul 7 06:13:09.901562 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jul 7 06:13:09.914203 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jul 7 06:13:09.914245 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 7 06:13:09.915713 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jul 7 06:13:09.915820 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 7 06:13:09.920025 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jul 7 06:13:09.921719 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 7 06:13:09.933741 kernel: nvme nvme0: pci function c05b:00:00.0 Jul 7 06:13:09.933886 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jul 7 06:13:09.944193 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x85 status: scsi 0x2 srb 
0x6 hv 0xc0000001 Jul 7 06:13:09.957718 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#0 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 7 06:13:10.184758 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 7 06:13:10.190744 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 06:13:10.490765 kernel: nvme nvme0: using unchecked data buffer Jul 7 06:13:10.704958 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jul 7 06:13:10.738732 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jul 7 06:13:10.751679 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jul 7 06:13:10.759249 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 7 06:13:10.759638 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 7 06:13:10.768878 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 06:13:10.783071 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 06:13:10.793825 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 06:13:10.784192 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:13:10.784463 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:13:10.784490 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:13:10.786808 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 06:13:10.810989 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jul 7 06:13:10.909106 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jul 7 06:13:10.913681 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jul 7 06:13:10.913808 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jul 7 06:13:10.920033 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jul 7 06:13:10.920159 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jul 7 06:13:10.923820 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jul 7 06:13:10.927719 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jul 7 06:13:10.929746 kernel: pci 7870:00:00.0: enabling Extended Tags Jul 7 06:13:10.943220 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jul 7 06:13:10.943409 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jul 7 06:13:10.944736 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jul 7 06:13:10.951140 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jul 7 06:13:10.957712 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jul 7 06:13:10.960759 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5278eb09 eth0: VF registering: eth1 Jul 7 06:13:10.960912 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jul 7 06:13:10.964722 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jul 7 06:13:11.818722 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 06:13:11.819556 disk-uuid[669]: The operation has completed successfully. Jul 7 06:13:11.869454 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 06:13:11.869532 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 06:13:11.896474 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jul 7 06:13:11.915523 sh[712]: Success Jul 7 06:13:11.945189 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 06:13:11.945227 kernel: device-mapper: uevent: version 1.0.3 Jul 7 06:13:11.945240 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 06:13:11.953775 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 7 06:13:12.171041 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 06:13:12.173937 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 06:13:12.188417 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 06:13:12.200712 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 06:13:12.203996 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (725) Jul 7 06:13:12.204040 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac Jul 7 06:13:12.205136 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:13:12.206063 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 06:13:12.498004 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 06:13:12.501040 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 06:13:12.503304 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 06:13:12.503866 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 06:13:12.514231 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 7 06:13:12.533717 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (748) Jul 7 06:13:12.537889 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:13:12.537922 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:13:12.537934 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 06:13:12.559621 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 06:13:12.560294 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:13:12.564815 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 06:13:12.583409 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:13:12.586810 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:13:12.614312 systemd-networkd[894]: lo: Link UP Jul 7 06:13:12.614319 systemd-networkd[894]: lo: Gained carrier Jul 7 06:13:12.616052 systemd-networkd[894]: Enumeration completed Jul 7 06:13:12.624581 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jul 7 06:13:12.624799 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 7 06:13:12.624935 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5278eb09 eth0: Data path switched to VF: enP30832s1 Jul 7 06:13:12.616110 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:13:12.617360 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:13:12.617363 systemd-networkd[894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:13:12.620827 systemd[1]: Reached target network.target - Network. 
Jul 7 06:13:12.625661 systemd-networkd[894]: enP30832s1: Link UP Jul 7 06:13:12.625731 systemd-networkd[894]: eth0: Link UP Jul 7 06:13:12.625811 systemd-networkd[894]: eth0: Gained carrier Jul 7 06:13:12.625820 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:13:12.635858 systemd-networkd[894]: enP30832s1: Gained carrier Jul 7 06:13:12.642733 systemd-networkd[894]: eth0: DHCPv4 address 10.200.4.33/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jul 7 06:13:13.400945 ignition[857]: Ignition 2.21.0 Jul 7 06:13:13.400957 ignition[857]: Stage: fetch-offline Jul 7 06:13:13.401031 ignition[857]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:13:13.401037 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:13:13.401110 ignition[857]: parsed url from cmdline: "" Jul 7 06:13:13.405910 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:13:13.401112 ignition[857]: no config URL provided Jul 7 06:13:13.409076 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 7 06:13:13.401116 ignition[857]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:13:13.401121 ignition[857]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:13:13.401125 ignition[857]: failed to fetch config: resource requires networking Jul 7 06:13:13.401270 ignition[857]: Ignition finished successfully Jul 7 06:13:13.428516 ignition[904]: Ignition 2.21.0 Jul 7 06:13:13.428526 ignition[904]: Stage: fetch Jul 7 06:13:13.428725 ignition[904]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:13:13.428733 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:13:13.428802 ignition[904]: parsed url from cmdline: "" Jul 7 06:13:13.428804 ignition[904]: no config URL provided Jul 7 06:13:13.428808 ignition[904]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:13:13.428813 ignition[904]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:13:13.428840 ignition[904]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 7 06:13:13.526027 ignition[904]: GET result: OK Jul 7 06:13:13.526145 ignition[904]: config has been read from IMDS userdata Jul 7 06:13:13.526180 ignition[904]: parsing config with SHA512: 78bb6b41741043f1f6789953c85a3d9ff6c81ed04c4a89b6f3a90af454d6b4c5fd69922d3b302d3a94bc8958fcaa0296250eacc2083fc9bdd312043f13053819 Jul 7 06:13:13.532567 unknown[904]: fetched base config from "system" Jul 7 06:13:13.532576 unknown[904]: fetched base config from "system" Jul 7 06:13:13.532884 ignition[904]: fetch: fetch complete Jul 7 06:13:13.532579 unknown[904]: fetched user config from "azure" Jul 7 06:13:13.532888 ignition[904]: fetch: fetch passed Jul 7 06:13:13.535046 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 7 06:13:13.532919 ignition[904]: Ignition finished successfully Jul 7 06:13:13.541353 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 7 06:13:13.560570 ignition[911]: Ignition 2.21.0 Jul 7 06:13:13.560577 ignition[911]: Stage: kargs Jul 7 06:13:13.561331 ignition[911]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:13:13.561338 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:13:13.564881 ignition[911]: kargs: kargs passed Jul 7 06:13:13.565876 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 06:13:13.564917 ignition[911]: Ignition finished successfully Jul 7 06:13:13.571804 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 06:13:13.595255 ignition[918]: Ignition 2.21.0 Jul 7 06:13:13.595264 ignition[918]: Stage: disks Jul 7 06:13:13.596809 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 06:13:13.595422 ignition[918]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:13:13.599191 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 06:13:13.595429 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:13:13.596064 ignition[918]: disks: disks passed Jul 7 06:13:13.596092 ignition[918]: Ignition finished successfully Jul 7 06:13:13.607787 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 06:13:13.610752 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:13:13.614755 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:13:13.618742 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:13:13.623965 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 06:13:13.685506 systemd-fsck[926]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jul 7 06:13:13.689970 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 06:13:13.694339 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 7 06:13:13.977717 kernel: EXT4-fs (nvme0n1p9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none. Jul 7 06:13:13.978373 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 06:13:13.979660 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 06:13:13.998208 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:13:14.001145 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 06:13:14.016678 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 7 06:13:14.021779 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 06:13:14.021808 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:13:14.025090 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 06:13:14.032403 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (935) Jul 7 06:13:14.034858 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 06:13:14.040768 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:13:14.040788 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:13:14.040798 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 06:13:14.044337 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 06:13:14.271863 systemd-networkd[894]: eth0: Gained IPv6LL Jul 7 06:13:14.591821 systemd-networkd[894]: enP30832s1: Gained IPv6LL Jul 7 06:13:14.627750 coreos-metadata[937]: Jul 07 06:13:14.627 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 7 06:13:14.630671 coreos-metadata[937]: Jul 07 06:13:14.630 INFO Fetch successful Jul 7 06:13:14.631849 coreos-metadata[937]: Jul 07 06:13:14.631 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 7 06:13:14.640717 coreos-metadata[937]: Jul 07 06:13:14.640 INFO Fetch successful Jul 7 06:13:14.644787 coreos-metadata[937]: Jul 07 06:13:14.641 INFO wrote hostname ci-4372.0.1-a-04b45ab1a6 to /sysroot/etc/hostname Jul 7 06:13:14.643183 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 06:13:14.831195 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 06:13:14.913258 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory Jul 7 06:13:14.918293 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 06:13:14.923027 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 06:13:15.602689 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 06:13:15.605502 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 06:13:15.620111 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 06:13:15.627868 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jul 7 06:13:15.632429 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:13:15.651662 ignition[1059]: INFO : Ignition 2.21.0 Jul 7 06:13:15.651662 ignition[1059]: INFO : Stage: mount Jul 7 06:13:15.656815 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:13:15.656815 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:13:15.656815 ignition[1059]: INFO : mount: mount passed Jul 7 06:13:15.656815 ignition[1059]: INFO : Ignition finished successfully Jul 7 06:13:15.654412 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 06:13:15.655182 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 06:13:15.668674 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 06:13:15.678056 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:13:15.698719 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (1072) Jul 7 06:13:15.700778 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:13:15.700818 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:13:15.702083 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 06:13:15.705687 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 06:13:15.727373 ignition[1089]: INFO : Ignition 2.21.0 Jul 7 06:13:15.727373 ignition[1089]: INFO : Stage: files Jul 7 06:13:15.730393 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:13:15.730393 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:13:15.730393 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping Jul 7 06:13:15.743356 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 06:13:15.745228 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 06:13:15.793909 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 06:13:15.796771 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 06:13:15.796771 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 06:13:15.795919 unknown[1089]: wrote ssh authorized keys file for user: core Jul 7 06:13:15.813661 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 06:13:15.817749 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 7 06:13:16.073287 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 06:13:16.820392 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 06:13:16.824778 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 06:13:16.824778 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 
06:13:16.824778 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:13:16.824778 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:13:16.824778 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:13:16.824778 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:13:16.824778 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:13:16.824778 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:13:16.846300 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:13:16.846300 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:13:16.846300 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 06:13:16.846300 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 06:13:16.846300 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 06:13:16.846300 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 7 06:13:17.536620 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 06:13:18.436565 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 06:13:18.436565 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 06:13:18.710865 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:13:18.876853 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:13:18.876853 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 06:13:18.876853 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 7 06:13:18.885766 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 06:13:18.885766 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:13:18.885766 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:13:18.885766 ignition[1089]: INFO : files: files passed Jul 7 06:13:18.885766 ignition[1089]: INFO : Ignition finished successfully Jul 7 06:13:18.884534 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 06:13:18.890815 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 06:13:18.896815 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jul 7 06:13:18.904429 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 06:13:18.919849 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:13:18.919849 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:13:18.904509 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 06:13:18.921926 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:13:18.914301 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:13:18.919441 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 06:13:18.931376 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 06:13:18.962829 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:13:18.962904 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:13:18.963646 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:13:18.963990 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:13:18.964065 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 06:13:18.965804 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 06:13:18.976425 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:13:18.983710 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:13:18.998185 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:13:18.999844 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:13:19.003865 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 06:13:19.006929 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 06:13:19.007034 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:13:19.007569 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 06:13:19.008136 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 06:13:19.008404 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 06:13:19.018745 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:13:19.021680 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 06:13:19.023892 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:13:19.025761 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 06:13:19.029001 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:13:19.030729 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 06:13:19.033753 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 06:13:19.036308 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 06:13:19.039811 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 06:13:19.039938 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:13:19.040461 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:13:19.040729 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:13:19.041137 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 06:13:19.041579 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:13:19.041658 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 06:13:19.041748 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:13:19.047491 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 06:13:19.047601 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:13:19.061005 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 06:13:19.061135 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 06:13:19.062446 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 7 06:13:19.062541 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 06:13:19.064800 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 06:13:19.064874 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 06:13:19.064988 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:13:19.066737 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 06:13:19.066788 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 06:13:19.066927 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:13:19.067213 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 06:13:19.105818 ignition[1144]: INFO : Ignition 2.21.0
Jul 7 06:13:19.105818 ignition[1144]: INFO : Stage: umount
Jul 7 06:13:19.105818 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:13:19.105818 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 06:13:19.105818 ignition[1144]: INFO : umount: umount passed
Jul 7 06:13:19.105818 ignition[1144]: INFO : Ignition finished successfully
Jul 7 06:13:19.067302 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:13:19.077774 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 06:13:19.080922 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 06:13:19.106569 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 06:13:19.106665 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 06:13:19.111238 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 06:13:19.111523 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 06:13:19.111552 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 06:13:19.114774 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 06:13:19.114809 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 06:13:19.118755 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 06:13:19.118786 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 06:13:19.122763 systemd[1]: Stopped target network.target - Network.
Jul 7 06:13:19.124611 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 06:13:19.124651 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:13:19.127750 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 06:13:19.130109 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 06:13:19.133768 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:13:19.137743 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 06:13:19.141737 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 06:13:19.144228 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 06:13:19.144257 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:13:19.147019 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 06:13:19.147044 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:13:19.149388 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 06:13:19.149429 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 06:13:19.153755 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 06:13:19.153787 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 06:13:19.154430 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 06:13:19.154677 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 06:13:19.169573 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 06:13:19.169657 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 06:13:19.176598 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 06:13:19.176875 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 06:13:19.176938 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 06:13:19.192482 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 06:13:19.193317 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 06:13:19.195065 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 06:13:19.195123 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:13:19.199305 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 06:13:19.199691 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 06:13:19.199743 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:13:19.200036 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:13:19.200062 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:13:19.205455 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 06:13:19.237768 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5278eb09 eth0: Data path switched from VF: enP30832s1
Jul 7 06:13:19.237918 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jul 7 06:13:19.205530 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:13:19.209780 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 06:13:19.209825 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:13:19.214453 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:13:19.220813 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 06:13:19.220864 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:13:19.229341 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 06:13:19.229454 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:13:19.233941 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 06:13:19.234006 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:13:19.240234 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 06:13:19.240267 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:13:19.245418 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 06:13:19.245468 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:13:19.261770 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 06:13:19.261820 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:13:19.265802 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:13:19.265848 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:13:19.269789 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 06:13:19.275744 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 06:13:19.275798 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:13:19.277610 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 06:13:19.277648 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:13:19.281010 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:13:19.281059 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:13:19.285146 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 7 06:13:19.285191 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 7 06:13:19.285226 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:13:19.285466 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 06:13:19.285560 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 06:13:19.300918 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 06:13:19.300993 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 06:13:20.165406 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 06:13:20.165537 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 06:13:20.169991 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 06:13:20.174743 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 06:13:20.174796 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 06:13:20.180814 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 06:13:20.192863 systemd[1]: Switching root.
Jul 7 06:13:22.428927 systemd-journald[205]: Journal stopped
Jul 7 06:13:32.862584 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Jul 7 06:13:32.862613 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 06:13:32.862625 kernel: SELinux: policy capability open_perms=1
Jul 7 06:13:32.862634 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 06:13:32.862641 kernel: SELinux: policy capability always_check_network=0
Jul 7 06:13:32.862649 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 06:13:32.862659 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 06:13:32.862667 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 06:13:32.862674 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 06:13:32.862682 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 06:13:32.862690 kernel: audit: type=1403 audit(1751868808.817:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 06:13:32.862716 systemd[1]: Successfully loaded SELinux policy in 186.945ms.
Jul 7 06:13:32.862725 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.194ms.
Jul 7 06:13:32.862737 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:13:32.862746 systemd[1]: Detected virtualization microsoft.
Jul 7 06:13:32.862754 systemd[1]: Detected architecture x86-64.
Jul 7 06:13:32.862761 systemd[1]: Detected first boot.
Jul 7 06:13:32.864437 systemd[1]: Hostname set to .
Jul 7 06:13:32.864507 systemd[1]: Initializing machine ID from random generator.
Jul 7 06:13:32.864518 zram_generator::config[1188]: No configuration found.
Jul 7 06:13:32.864528 kernel: Guest personality initialized and is inactive
Jul 7 06:13:32.864537 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Jul 7 06:13:32.864545 kernel: Initialized host personality
Jul 7 06:13:32.864552 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 06:13:32.864561 systemd[1]: Populated /etc with preset unit settings.
Jul 7 06:13:32.864572 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 06:13:32.864580 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 06:13:32.864589 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 06:13:32.864598 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:13:32.864662 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 06:13:32.864792 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 06:13:32.864949 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 06:13:32.865012 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 06:13:32.865022 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 06:13:32.865081 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 06:13:32.865150 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 06:13:32.865214 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 06:13:32.865225 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:13:32.865237 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:13:32.865248 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 06:13:32.865265 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 06:13:32.865279 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 06:13:32.865290 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:13:32.865302 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 06:13:32.865315 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:13:32.865327 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:13:32.865337 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 06:13:32.865351 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 06:13:32.865364 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:13:32.865375 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 06:13:32.865386 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:13:32.865397 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:13:32.865411 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:13:32.865422 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:13:32.865434 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 06:13:32.865446 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 06:13:32.865461 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 06:13:32.865475 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:13:32.865485 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:13:32.865499 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:13:32.865522 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 06:13:32.865534 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 06:13:32.865546 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 06:13:32.865560 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 06:13:32.865571 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:32.865582 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 06:13:32.865593 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 06:13:32.865603 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 06:13:32.865616 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 06:13:32.865632 systemd[1]: Reached target machines.target - Containers.
Jul 7 06:13:32.865646 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 06:13:32.865659 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:13:32.865672 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:13:32.865685 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 06:13:32.870548 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:13:32.870578 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:13:32.870588 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:13:32.870597 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 06:13:32.870611 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:13:32.870621 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 06:13:32.870630 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 06:13:32.870640 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 06:13:32.870649 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 06:13:32.870658 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 06:13:32.870668 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:13:32.870678 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:13:32.870689 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:13:32.870717 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:13:32.870728 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 06:13:32.870737 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 06:13:32.870748 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:13:32.870758 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 06:13:32.870766 systemd[1]: Stopped verity-setup.service.
Jul 7 06:13:32.870776 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:32.870786 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 06:13:32.870794 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 06:13:32.870803 kernel: loop: module loaded
Jul 7 06:13:32.870836 systemd-journald[1266]: Collecting audit messages is disabled.
Jul 7 06:13:32.870862 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 06:13:32.870872 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 06:13:32.870881 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 06:13:32.870890 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 06:13:32.870900 systemd-journald[1266]: Journal started
Jul 7 06:13:32.870922 systemd-journald[1266]: Runtime Journal (/run/log/journal/6701349175fd4666a8ec19e0d2c95235) is 8M, max 158.9M, 150.9M free.
Jul 7 06:13:32.515636 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 06:13:32.524068 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 7 06:13:32.524363 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 06:13:32.874502 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:13:32.877731 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:13:32.881190 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 06:13:32.881325 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 06:13:32.884080 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:13:32.884412 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:13:32.887603 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:13:32.888160 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:13:32.897632 kernel: fuse: init (API version 7.41)
Jul 7 06:13:32.892271 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:13:32.892417 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:13:32.894847 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:13:32.898340 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:13:32.901931 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 06:13:32.902060 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 06:13:32.905036 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 06:13:32.909966 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 06:13:32.919838 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:13:32.928151 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 06:13:32.932769 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 06:13:32.938779 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 06:13:32.938807 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:13:32.941795 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 06:13:32.953979 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 06:13:32.956330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:13:32.957870 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 06:13:32.964687 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 06:13:32.966844 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:13:32.968058 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 06:13:32.971848 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:13:32.974796 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:13:32.980805 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 06:13:32.984835 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:13:32.986903 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 06:13:32.991395 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 06:13:33.382727 systemd-journald[1266]: Time spent on flushing to /var/log/journal/6701349175fd4666a8ec19e0d2c95235 is 7.793ms for 980 entries.
Jul 7 06:13:33.382727 systemd-journald[1266]: System Journal (/var/log/journal/6701349175fd4666a8ec19e0d2c95235) is 8M, max 2.6G, 2.6G free.
Jul 7 06:13:34.314805 systemd-journald[1266]: Received client request to flush runtime journal.
Jul 7 06:13:34.314859 kernel: ACPI: bus type drm_connector registered
Jul 7 06:13:34.314881 kernel: loop0: detected capacity change from 0 to 28496
Jul 7 06:13:33.414419 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:13:33.414540 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:13:33.512197 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 06:13:33.514482 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 06:13:33.517296 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 06:13:33.520134 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 06:13:33.526228 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 06:13:33.532999 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:13:34.316618 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 06:13:34.337756 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 06:13:34.340840 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:13:34.397763 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Jul 7 06:13:34.397777 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Jul 7 06:13:34.401399 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:13:34.425221 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 06:13:34.425687 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 7 06:13:34.492725 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 06:13:34.528716 kernel: loop1: detected capacity change from 0 to 221472
Jul 7 06:13:34.573732 kernel: loop2: detected capacity change from 0 to 113872
Jul 7 06:13:34.922729 kernel: loop3: detected capacity change from 0 to 146240
Jul 7 06:13:35.000766 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 06:13:35.004978 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:13:35.031878 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
Jul 7 06:13:35.140403 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:13:35.144464 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:13:35.178952 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 06:13:35.229819 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 06:13:35.287720 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 06:13:35.288234 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 06:13:35.292730 kernel: hv_vmbus: registering driver hyperv_fb
Jul 7 06:13:35.294721 kernel: hv_vmbus: registering driver hv_balloon
Jul 7 06:13:35.294767 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 7 06:13:35.304896 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 7 06:13:35.308732 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 7 06:13:35.312803 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#65 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 7 06:13:35.324639 kernel: Console: switching to colour dummy device 80x25
Jul 7 06:13:35.326770 kernel: Console: switching to colour frame buffer device 128x48
Jul 7 06:13:35.356730 kernel: loop4: detected capacity change from 0 to 28496
Jul 7 06:13:35.375771 kernel: loop5: detected capacity change from 0 to 221472
Jul 7 06:13:35.396722 kernel: loop6: detected capacity change from 0 to 113872
Jul 7 06:13:35.410764 kernel: loop7: detected capacity change from 0 to 146240
Jul 7 06:13:35.426899 (sd-merge)[1419]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jul 7 06:13:35.431379 (sd-merge)[1419]: Merged extensions into '/usr'.
Jul 7 06:13:35.437807 systemd[1]: Reload requested from client PID 1320 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 06:13:35.437823 systemd[1]: Reloading...
Jul 7 06:13:35.479759 systemd-networkd[1357]: lo: Link UP
Jul 7 06:13:35.480248 systemd-networkd[1357]: lo: Gained carrier
Jul 7 06:13:35.484117 systemd-networkd[1357]: Enumeration completed
Jul 7 06:13:35.484376 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:13:35.484379 systemd-networkd[1357]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:13:35.486872 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jul 7 06:13:35.488739 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jul 7 06:13:35.491774 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5278eb09 eth0: Data path switched to VF: enP30832s1
Jul 7 06:13:35.492144 systemd-networkd[1357]: enP30832s1: Link UP
Jul 7 06:13:35.492530 systemd-networkd[1357]: eth0: Link UP
Jul 7 06:13:35.492588 systemd-networkd[1357]: eth0: Gained carrier
Jul 7 06:13:35.492631 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:13:35.498978 systemd-networkd[1357]: enP30832s1: Gained carrier
Jul 7 06:13:35.507750 systemd-networkd[1357]: eth0: DHCPv4 address 10.200.4.33/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jul 7 06:13:35.546726 zram_generator::config[1458]: No configuration found.
Jul 7 06:13:35.681596 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:13:35.694774 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jul 7 06:13:35.773028 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jul 7 06:13:35.776145 systemd[1]: Reloading finished in 338 ms.
Jul 7 06:13:35.806113 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:13:35.808973 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 06:13:35.842480 systemd[1]: Starting ensure-sysext.service...
Jul 7 06:13:35.844937 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 06:13:35.850576 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 7 06:13:35.856057 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 06:13:35.859079 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:13:35.871886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:13:35.887205 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 06:13:35.889126 systemd-tmpfiles[1528]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 7 06:13:35.889146 systemd-tmpfiles[1528]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 7 06:13:35.889310 systemd-tmpfiles[1528]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 06:13:35.889492 systemd-tmpfiles[1528]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 06:13:35.890058 systemd-tmpfiles[1528]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 06:13:35.890251 systemd-tmpfiles[1528]: ACLs are not supported, ignoring.
Jul 7 06:13:35.890291 systemd-tmpfiles[1528]: ACLs are not supported, ignoring.
Jul 7 06:13:35.896086 systemd-tmpfiles[1528]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:13:35.896097 systemd-tmpfiles[1528]: Skipping /boot
Jul 7 06:13:35.896773 systemd[1]: Reload requested from client PID 1524 ('systemctl') (unit ensure-sysext.service)...
Jul 7 06:13:35.896780 systemd[1]: Reloading...
Jul 7 06:13:35.906957 systemd-tmpfiles[1528]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:13:35.906966 systemd-tmpfiles[1528]: Skipping /boot
Jul 7 06:13:35.956719 zram_generator::config[1565]: No configuration found.
Jul 7 06:13:36.025733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:13:36.104931 systemd[1]: Reloading finished in 207 ms.
Jul 7 06:13:36.121612 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 7 06:13:36.123616 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:13:36.131265 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:36.132125 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:13:36.144609 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 06:13:36.148494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:13:36.150864 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:13:36.154854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:13:36.158782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:13:36.159313 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:13:36.159405 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:13:36.161796 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 06:13:36.163935 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:13:36.165940 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 06:13:36.166325 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:36.167520 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:13:36.168317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:13:36.171118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:13:36.171292 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:13:36.173298 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:13:36.176354 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:36.176501 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:13:36.179980 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:13:36.184821 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:13:36.186195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:13:36.186306 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:13:36.186383 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:36.198468 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:13:36.198756 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:13:36.201215 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:13:36.201748 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:13:36.202693 systemd[1]: Finished ensure-sysext.service.
Jul 7 06:13:36.203125 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:13:36.203278 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:13:36.208052 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 06:13:36.212142 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:36.212298 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:13:36.213238 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:13:36.213761 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:13:36.213788 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:13:36.213819 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:13:36.213853 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:13:36.213878 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 06:13:36.214319 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:36.226399 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:13:36.227086 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:13:36.244397 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 06:13:36.258857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:13:36.279353 augenrules[1669]: No rules
Jul 7 06:13:36.279627 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:13:36.279844 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:13:36.291223 systemd-resolved[1633]: Positive Trust Anchors:
Jul 7 06:13:36.291232 systemd-resolved[1633]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:13:36.291259 systemd-resolved[1633]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:13:36.294095 systemd-resolved[1633]: Using system hostname 'ci-4372.0.1-a-04b45ab1a6'.
Jul 7 06:13:36.295415 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:13:36.297842 systemd[1]: Reached target network.target - Network.
Jul 7 06:13:36.298905 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:13:36.595136 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 06:13:36.597240 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 06:13:37.247839 systemd-networkd[1357]: eth0: Gained IPv6LL
Jul 7 06:13:37.248277 systemd-networkd[1357]: enP30832s1: Gained IPv6LL
Jul 7 06:13:37.250090 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 06:13:37.252019 systemd[1]: Reached target network-online.target - Network is Online.
Jul 7 06:13:38.891083 ldconfig[1308]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 06:13:38.903785 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 06:13:38.907867 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 06:13:38.924284 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 06:13:38.927911 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:13:38.930841 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 06:13:38.932245 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 06:13:38.933734 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 7 06:13:38.936836 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 06:13:38.939822 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 06:13:38.941333 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 06:13:38.944736 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 06:13:38.944765 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:13:38.945837 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:13:38.948075 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 06:13:38.951570 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 06:13:38.955040 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 7 06:13:38.956786 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 7 06:13:38.959776 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 7 06:13:38.969093 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 06:13:38.970743 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 7 06:13:38.974374 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 06:13:38.977827 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:13:38.979138 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:13:38.981906 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:13:38.981936 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:13:38.983779 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 7 06:13:38.987314 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 06:13:38.991862 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 7 06:13:38.994608 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 06:13:38.999388 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 06:13:39.004673 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 06:13:39.007675 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 06:13:39.009339 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 06:13:39.010920 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 7 06:13:39.014793 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Jul 7 06:13:39.015789 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jul 7 06:13:39.018849 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jul 7 06:13:39.020841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:13:39.025878 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 06:13:39.031569 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 7 06:13:39.033890 jq[1687]: false
Jul 7 06:13:39.034426 KVP[1693]: KVP starting; pid is:1693
Jul 7 06:13:39.037151 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 06:13:39.041877 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 06:13:39.045755 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Refreshing passwd entry cache
Jul 7 06:13:39.044784 oslogin_cache_refresh[1692]: Refreshing passwd entry cache
Jul 7 06:13:39.048830 kernel: hv_utils: KVP IC version 4.0
Jul 7 06:13:39.048768 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 06:13:39.048541 KVP[1693]: KVP LIC Version: 3.1
Jul 7 06:13:39.053889 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 06:13:39.057591 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 06:13:39.057988 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 06:13:39.061533 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 06:13:39.065596 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 06:13:39.074161 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 06:13:39.077666 extend-filesystems[1690]: Found /dev/nvme0n1p6
Jul 7 06:13:39.079371 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 06:13:39.079540 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 06:13:39.085004 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 06:13:39.085166 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 06:13:39.095803 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Failure getting users, quitting
Jul 7 06:13:39.095803 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:13:39.095782 oslogin_cache_refresh[1692]: Failure getting users, quitting
Jul 7 06:13:39.095796 oslogin_cache_refresh[1692]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:13:39.099490 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Refreshing group entry cache
Jul 7 06:13:39.099176 oslogin_cache_refresh[1692]: Refreshing group entry cache
Jul 7 06:13:39.101953 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 06:13:39.102732 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 06:13:39.107458 jq[1706]: true
Jul 7 06:13:39.109625 (chronyd)[1682]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 7 06:13:39.122442 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Failure getting groups, quitting
Jul 7 06:13:39.122442 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:13:39.121838 oslogin_cache_refresh[1692]: Failure getting groups, quitting
Jul 7 06:13:39.121846 oslogin_cache_refresh[1692]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:13:39.123181 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 7 06:13:39.123343 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 7 06:13:39.126389 chronyd[1732]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 7 06:13:39.129063 extend-filesystems[1690]: Found /dev/nvme0n1p9
Jul 7 06:13:39.130396 (ntainerd)[1726]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 06:13:39.134157 extend-filesystems[1690]: Checking size of /dev/nvme0n1p9
Jul 7 06:13:39.140753 jq[1728]: true
Jul 7 06:13:39.334142 chronyd[1732]: Timezone right/UTC failed leap second check, ignoring
Jul 7 06:13:39.334297 chronyd[1732]: Loaded seccomp filter (level 2)
Jul 7 06:13:39.335345 systemd[1]: Started chronyd.service - NTP client/server.
Jul 7 06:13:39.387028 extend-filesystems[1690]: Old size kept for /dev/nvme0n1p9
Jul 7 06:13:39.387947 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 06:13:39.388109 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 06:13:39.421829 update_engine[1704]: I20250707 06:13:39.420882 1704 main.cc:92] Flatcar Update Engine starting
Jul 7 06:13:39.447938 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 06:13:39.451541 systemd-logind[1703]: New seat seat0.
Jul 7 06:13:39.462017 systemd-logind[1703]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jul 7 06:13:39.462273 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 06:13:39.473691 tar[1714]: linux-amd64/helm
Jul 7 06:13:39.687341 dbus-daemon[1685]: [system] SELinux support is enabled
Jul 7 06:13:39.687461 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 06:13:39.809102 update_engine[1704]: I20250707 06:13:39.692811 1704 update_check_scheduler.cc:74] Next update check in 3m14s
Jul 7 06:13:39.695266 dbus-daemon[1685]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 7 06:13:39.694208 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 06:13:39.694233 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 06:13:39.697818 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 06:13:39.697836 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 06:13:39.701822 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 06:13:39.707886 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 06:13:39.813309 sshd_keygen[1727]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 7 06:13:39.847830 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 7 06:13:39.851645 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 7 06:13:39.858535 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul 7 06:13:39.888470 systemd[1]: issuegen.service: Deactivated successfully.
Jul 7 06:13:39.889513 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 7 06:13:39.895681 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 7 06:13:39.901907 coreos-metadata[1684]: Jul 07 06:13:39.901 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 7 06:13:39.906556 coreos-metadata[1684]: Jul 07 06:13:39.904 INFO Fetch successful
Jul 7 06:13:39.906556 coreos-metadata[1684]: Jul 07 06:13:39.905 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 7 06:13:39.911049 coreos-metadata[1684]: Jul 07 06:13:39.910 INFO Fetch successful
Jul 7 06:13:39.911173 coreos-metadata[1684]: Jul 07 06:13:39.911 INFO Fetching http://168.63.129.16/machine/dc2bc343-f3bb-4696-8b29-9f0e1df50554/33fcf24a%2D710c%2D4117%2Db270%2Dff8281d53df8.%5Fci%2D4372.0.1%2Da%2D04b45ab1a6?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 7 06:13:39.912442 coreos-metadata[1684]: Jul 07 06:13:39.912 INFO Fetch successful
Jul 7 06:13:39.913833 coreos-metadata[1684]: Jul 07 06:13:39.912 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 7 06:13:39.914006 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jul 7 06:13:39.928730 coreos-metadata[1684]: Jul 07 06:13:39.928 INFO Fetch successful
Jul 7 06:13:39.944902 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 7 06:13:39.948043 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 7 06:13:40.038343 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 7 06:13:40.044815 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 7 06:13:40.047903 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 7 06:13:40.050490 systemd[1]: Reached target getty.target - Login Prompts.
Jul 7 06:13:40.448488 tar[1714]: linux-amd64/LICENSE
Jul 7 06:13:40.663880 tar[1714]: linux-amd64/README.md
Jul 7 06:13:40.669500 bash[1753]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 06:13:40.666632 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 06:13:40.670684 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 7 06:13:40.681012 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 7 06:13:40.682500 locksmithd[1790]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 06:13:40.842214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:13:40.854929 (kubelet)[1834]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:13:41.292955 kubelet[1834]: E0707 06:13:41.292918 1834 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:13:41.294314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:13:41.294428 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:13:41.294812 systemd[1]: kubelet.service: Consumed 814ms CPU time, 264M memory peak.
Jul 7 06:13:41.883376 containerd[1726]: time="2025-07-07T06:13:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 7 06:13:41.883971 containerd[1726]: time="2025-07-07T06:13:41.883947158Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 7 06:13:41.889450 containerd[1726]: time="2025-07-07T06:13:41.889418837Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.975µs"
Jul 7 06:13:41.889450 containerd[1726]: time="2025-07-07T06:13:41.889442496Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 7 06:13:41.889523 containerd[1726]: time="2025-07-07T06:13:41.889458554Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 7 06:13:41.889599 containerd[1726]: time="2025-07-07T06:13:41.889584529Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 7 06:13:41.889599 containerd[1726]: time="2025-07-07T06:13:41.889596650Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 7 06:13:41.889647 containerd[1726]: time="2025-07-07T06:13:41.889613774Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:13:41.889664 containerd[1726]: time="2025-07-07T06:13:41.889656832Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:13:41.889681 containerd[1726]: time="2025-07-07T06:13:41.889665090Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:13:41.889862 containerd[1726]: time="2025-07-07T06:13:41.889845777Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:13:41.889887 containerd[1726]: time="2025-07-07T06:13:41.889870383Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:13:41.889887 containerd[1726]: time="2025-07-07T06:13:41.889881814Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:13:41.889929 containerd[1726]: time="2025-07-07T06:13:41.889889241Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 7 06:13:41.889948 containerd[1726]: time="2025-07-07T06:13:41.889939317Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 7 06:13:41.890109 containerd[1726]: time="2025-07-07T06:13:41.890092453Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:13:41.890136 containerd[1726]: time="2025-07-07T06:13:41.890113860Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:13:41.890136 containerd[1726]: time="2025-07-07T06:13:41.890123176Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 7 06:13:41.890171 containerd[1726]: time="2025-07-07T06:13:41.890149810Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 7 06:13:41.890383 containerd[1726]: time="2025-07-07T06:13:41.890349999Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 7 06:13:41.890436 containerd[1726]: time="2025-07-07T06:13:41.890410371Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 06:13:42.227207 containerd[1726]: time="2025-07-07T06:13:42.227163897Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 7 06:13:42.227207 containerd[1726]: time="2025-07-07T06:13:42.227218234Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 7 06:13:42.227359 containerd[1726]: time="2025-07-07T06:13:42.227232375Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 7 06:13:42.227359 containerd[1726]: time="2025-07-07T06:13:42.227241862Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 7 06:13:42.227359 containerd[1726]: time="2025-07-07T06:13:42.227252503Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 7 06:13:42.227359 containerd[1726]: time="2025-07-07T06:13:42.227261524Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 7 06:13:42.227359 containerd[1726]: time="2025-07-07T06:13:42.227271565Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 7 06:13:42.227359 containerd[1726]: time="2025-07-07T06:13:42.227282537Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 7 06:13:42.227359 containerd[1726]: time="2025-07-07T06:13:42.227293189Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 7 06:13:42.227359 containerd[1726]: time="2025-07-07T06:13:42.227303992Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 7 06:13:42.227359 containerd[1726]: time="2025-07-07T06:13:42.227311446Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 7 06:13:42.227359 containerd[1726]: time="2025-07-07T06:13:42.227322523Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 7 06:13:42.227555 containerd[1726]: time="2025-07-07T06:13:42.227427952Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 7 06:13:42.227555 containerd[1726]: time="2025-07-07T06:13:42.227446828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 7 06:13:42.227555 containerd[1726]: time="2025-07-07T06:13:42.227460848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 7 06:13:42.227555 containerd[1726]: time="2025-07-07T06:13:42.227471438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 7 06:13:42.227555 containerd[1726]: time="2025-07-07T06:13:42.227481019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 7 06:13:42.227555 containerd[1726]: time="2025-07-07T06:13:42.227489529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 7 06:13:42.227555 containerd[1726]: time="2025-07-07T06:13:42.227499943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 7 06:13:42.227555 containerd[1726]: time="2025-07-07T06:13:42.227508788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 7 06:13:42.227555 containerd[1726]: time="2025-07-07T06:13:42.227518489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 7 06:13:42.227555 containerd[1726]: time="2025-07-07T06:13:42.227527546Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 7 06:13:42.227555 containerd[1726]: time="2025-07-07T06:13:42.227537252Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 7 06:13:42.227799 containerd[1726]: time="2025-07-07T06:13:42.227595919Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 7 06:13:42.227799 containerd[1726]: time="2025-07-07T06:13:42.227608205Z" level=info msg="Start snapshots syncer"
Jul 7 06:13:42.227799 containerd[1726]: time="2025-07-07T06:13:42.227628334Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 7 06:13:42.227926 containerd[1726]: time="2025-07-07T06:13:42.227865089Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":fals
e,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 06:13:42.227926 containerd[1726]: time="2025-07-07T06:13:42.227933559Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 06:13:42.228118 containerd[1726]: time="2025-07-07T06:13:42.228000754Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 06:13:42.228118 containerd[1726]: time="2025-07-07T06:13:42.228079422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 06:13:42.228118 containerd[1726]: time="2025-07-07T06:13:42.228101056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 06:13:42.228118 containerd[1726]: time="2025-07-07T06:13:42.228111501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 06:13:42.228200 containerd[1726]: time="2025-07-07T06:13:42.228120139Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 06:13:42.228200 containerd[1726]: time="2025-07-07T06:13:42.228130394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 06:13:42.228200 containerd[1726]: 
time="2025-07-07T06:13:42.228140459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 06:13:42.228200 containerd[1726]: time="2025-07-07T06:13:42.228150600Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 06:13:42.228200 containerd[1726]: time="2025-07-07T06:13:42.228171005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 06:13:42.228200 containerd[1726]: time="2025-07-07T06:13:42.228180428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 06:13:42.228200 containerd[1726]: time="2025-07-07T06:13:42.228189883Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 06:13:42.228379 containerd[1726]: time="2025-07-07T06:13:42.228216899Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:13:42.228379 containerd[1726]: time="2025-07-07T06:13:42.228228700Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:13:42.228379 containerd[1726]: time="2025-07-07T06:13:42.228236613Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:13:42.228379 containerd[1726]: time="2025-07-07T06:13:42.228245498Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:13:42.228379 containerd[1726]: time="2025-07-07T06:13:42.228252479Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 06:13:42.228379 containerd[1726]: time="2025-07-07T06:13:42.228261599Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 06:13:42.228379 containerd[1726]: time="2025-07-07T06:13:42.228270816Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 06:13:42.228379 containerd[1726]: time="2025-07-07T06:13:42.228284879Z" level=info msg="runtime interface created" Jul 7 06:13:42.228379 containerd[1726]: time="2025-07-07T06:13:42.228289298Z" level=info msg="created NRI interface" Jul 7 06:13:42.228379 containerd[1726]: time="2025-07-07T06:13:42.228296924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 06:13:42.228379 containerd[1726]: time="2025-07-07T06:13:42.228306155Z" level=info msg="Connect containerd service" Jul 7 06:13:42.228379 containerd[1726]: time="2025-07-07T06:13:42.228336399Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:13:42.228908 containerd[1726]: time="2025-07-07T06:13:42.228888330Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:13:43.528200 containerd[1726]: time="2025-07-07T06:13:43.527950207Z" level=info msg="Start subscribing containerd event" Jul 7 06:13:43.528200 containerd[1726]: time="2025-07-07T06:13:43.528014168Z" level=info msg="Start recovering state" Jul 7 06:13:43.528200 containerd[1726]: time="2025-07-07T06:13:43.528162747Z" level=info msg="Start event monitor" Jul 7 06:13:43.528200 containerd[1726]: time="2025-07-07T06:13:43.528177866Z" level=info msg="Start cni network conf syncer for default" Jul 7 06:13:43.528841 containerd[1726]: time="2025-07-07T06:13:43.528187349Z" level=info msg="Start streaming server" Jul 7 06:13:43.528841 containerd[1726]: time="2025-07-07T06:13:43.528594697Z" level=info msg="Registered namespace \"k8s.io\" with NRI" 
Jul 7 06:13:43.528841 containerd[1726]: time="2025-07-07T06:13:43.528604586Z" level=info msg="runtime interface starting up..." Jul 7 06:13:43.528841 containerd[1726]: time="2025-07-07T06:13:43.528614978Z" level=info msg="starting plugins..." Jul 7 06:13:43.528841 containerd[1726]: time="2025-07-07T06:13:43.528629789Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 06:13:43.528995 containerd[1726]: time="2025-07-07T06:13:43.528973193Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 06:13:43.529196 containerd[1726]: time="2025-07-07T06:13:43.529018385Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 06:13:43.529229 containerd[1726]: time="2025-07-07T06:13:43.529219365Z" level=info msg="containerd successfully booted in 1.646067s" Jul 7 06:13:43.529810 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 06:13:43.533305 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 06:13:43.535797 systemd[1]: Startup finished in 2.712s (kernel) + 19.778s (initrd) + 14.904s (userspace) = 37.395s. 
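The "Startup finished" line above breaks boot time into kernel, initrd, and userspace phases. As a small illustrative sketch (not part of the log or of systemd itself), the line can be parsed like this; note that systemd sums the raw microsecond timestamps before rounding, so the printed parts may disagree with the printed total by a millisecond:

```python
import re

# The systemd summary line from the log above.
line = ("Startup finished in 2.712s (kernel) + 19.778s (initrd) "
        "+ 14.904s (userspace) = 37.395s.")

# Pull out each "<seconds>s (<phase>)" pair plus the reported total.
phases = {name: float(secs) for secs, name in
          re.findall(r"([\d.]+)s \((\w+)\)", line)}
total = float(re.search(r"= ([\d.]+)s", line).group(1))

print(phases)   # {'kernel': 2.712, 'initrd': 19.778, 'userspace': 14.904}
# Summing the rounded parts gives 37.394, one millisecond off the
# reported 37.395 total, because systemd rounds only once at the end.
print(round(sum(phases.values()), 3))  # 37.394
```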
Jul 7 06:13:44.008788 waagent[1811]: 2025-07-07T06:13:44.008727Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 7 06:13:44.009114 waagent[1811]: 2025-07-07T06:13:44.009078Z INFO Daemon Daemon OS: flatcar 4372.0.1 Jul 7 06:13:44.011159 waagent[1811]: 2025-07-07T06:13:44.010326Z INFO Daemon Daemon Python: 3.11.12 Jul 7 06:13:44.012130 waagent[1811]: 2025-07-07T06:13:44.012090Z INFO Daemon Daemon Run daemon Jul 7 06:13:44.013274 waagent[1811]: 2025-07-07T06:13:44.013071Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.0.1' Jul 7 06:13:44.015194 waagent[1811]: 2025-07-07T06:13:44.015144Z INFO Daemon Daemon Using waagent for provisioning Jul 7 06:13:44.024908 waagent[1811]: 2025-07-07T06:13:44.015729Z INFO Daemon Daemon Activate resource disk Jul 7 06:13:44.024908 waagent[1811]: 2025-07-07T06:13:44.015916Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 7 06:13:44.024908 waagent[1811]: 2025-07-07T06:13:44.017274Z INFO Daemon Daemon Found device: None Jul 7 06:13:44.024908 waagent[1811]: 2025-07-07T06:13:44.017386Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 7 06:13:44.024908 waagent[1811]: 2025-07-07T06:13:44.017439Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 7 06:13:44.024908 waagent[1811]: 2025-07-07T06:13:44.018111Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 06:13:44.024908 waagent[1811]: 2025-07-07T06:13:44.018396Z INFO Daemon Daemon Running default provisioning handler Jul 7 06:13:44.024908 waagent[1811]: 2025-07-07T06:13:44.024039Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 7 06:13:44.026858 waagent[1811]: 2025-07-07T06:13:44.025939Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 7 06:13:44.026858 waagent[1811]: 2025-07-07T06:13:44.026046Z INFO Daemon Daemon cloud-init is enabled: False Jul 7 06:13:44.026858 waagent[1811]: 2025-07-07T06:13:44.026092Z INFO Daemon Daemon Copying ovf-env.xml Jul 7 06:13:44.074996 login[1818]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 7 06:13:44.076251 login[1819]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 06:13:44.083143 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 06:13:44.084575 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 06:13:44.086450 systemd-logind[1703]: New session 1 of user core. Jul 7 06:13:44.107275 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:13:44.109084 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 06:13:44.109675 waagent[1811]: 2025-07-07T06:13:44.109594Z INFO Daemon Daemon Successfully mounted dvd Jul 7 06:13:44.118497 (systemd)[1873]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:13:44.120396 systemd-logind[1703]: New session c1 of user core. Jul 7 06:13:44.149567 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 7 06:13:44.151498 waagent[1811]: 2025-07-07T06:13:44.151457Z INFO Daemon Daemon Detect protocol endpoint Jul 7 06:13:44.153442 waagent[1811]: 2025-07-07T06:13:44.152039Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 06:13:44.153442 waagent[1811]: 2025-07-07T06:13:44.152388Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 7 06:13:44.153442 waagent[1811]: 2025-07-07T06:13:44.152691Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 7 06:13:44.153442 waagent[1811]: 2025-07-07T06:13:44.152830Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 7 06:13:44.153442 waagent[1811]: 2025-07-07T06:13:44.153082Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 7 06:13:44.164728 waagent[1811]: 2025-07-07T06:13:44.163501Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 7 06:13:44.166403 waagent[1811]: 2025-07-07T06:13:44.166373Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 7 06:13:44.168235 waagent[1811]: 2025-07-07T06:13:44.168147Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 7 06:13:44.244774 waagent[1811]: 2025-07-07T06:13:44.244726Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 7 06:13:44.245004 waagent[1811]: 2025-07-07T06:13:44.244977Z INFO Daemon Daemon Forcing an update of the goal state. Jul 7 06:13:44.258264 waagent[1811]: 2025-07-07T06:13:44.258229Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 06:13:44.266756 systemd[1873]: Queued start job for default target default.target. Jul 7 06:13:44.277311 waagent[1811]: 2025-07-07T06:13:44.277284Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 7 06:13:44.279116 waagent[1811]: 2025-07-07T06:13:44.278696Z INFO Daemon Jul 7 06:13:44.279847 systemd[1873]: Created slice app.slice - User Application Slice. Jul 7 06:13:44.279937 systemd[1873]: Reached target paths.target - Paths. Jul 7 06:13:44.280842 waagent[1811]: 2025-07-07T06:13:44.280096Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ab5f23e7-5734-4ea4-b0d3-96e6e187e436 eTag: 2058386544938371733 source: Fabric] Jul 7 06:13:44.279965 systemd[1873]: Reached target timers.target - Timers. Jul 7 06:13:44.282786 systemd[1873]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jul 7 06:13:44.284028 waagent[1811]: 2025-07-07T06:13:44.283988Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 7 06:13:44.286722 waagent[1811]: 2025-07-07T06:13:44.286676Z INFO Daemon Jul 7 06:13:44.287883 waagent[1811]: 2025-07-07T06:13:44.287842Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 7 06:13:44.290487 systemd[1873]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:13:44.290597 systemd[1873]: Reached target sockets.target - Sockets. Jul 7 06:13:44.290842 systemd[1873]: Reached target basic.target - Basic System. Jul 7 06:13:44.290954 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 06:13:44.291054 systemd[1873]: Reached target default.target - Main User Target. Jul 7 06:13:44.291116 systemd[1873]: Startup finished in 166ms. Jul 7 06:13:44.296822 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:13:44.298475 waagent[1811]: 2025-07-07T06:13:44.298378Z INFO Daemon Daemon Downloading artifacts profile blob Jul 7 06:13:44.477991 waagent[1811]: 2025-07-07T06:13:44.477950Z INFO Daemon Downloaded certificate {'thumbprint': 'C5FA5AF5F3E64232372811AFA9A94403DC73E963', 'hasPrivateKey': True} Jul 7 06:13:44.481286 waagent[1811]: 2025-07-07T06:13:44.480063Z INFO Daemon Fetch goal state completed Jul 7 06:13:44.530957 waagent[1811]: 2025-07-07T06:13:44.530873Z INFO Daemon Daemon Starting provisioning Jul 7 06:13:44.532720 waagent[1811]: 2025-07-07T06:13:44.531269Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 7 06:13:44.532720 waagent[1811]: 2025-07-07T06:13:44.531332Z INFO Daemon Daemon Set hostname [ci-4372.0.1-a-04b45ab1a6] Jul 7 06:13:44.826538 waagent[1811]: 2025-07-07T06:13:44.826488Z INFO Daemon Daemon Publish hostname [ci-4372.0.1-a-04b45ab1a6] Jul 7 06:13:44.828216 waagent[1811]: 2025-07-07T06:13:44.827249Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 7 06:13:44.828216 waagent[1811]: 2025-07-07T06:13:44.827627Z INFO Daemon Daemon Primary interface is [eth0] Jul 7 06:13:44.834126 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:13:44.834133 systemd-networkd[1357]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:13:44.834155 systemd-networkd[1357]: eth0: DHCP lease lost Jul 7 06:13:44.834919 waagent[1811]: 2025-07-07T06:13:44.834880Z INFO Daemon Daemon Create user account if not exists Jul 7 06:13:44.836136 waagent[1811]: 2025-07-07T06:13:44.836103Z INFO Daemon Daemon User core already exists, skip useradd Jul 7 06:13:44.837422 waagent[1811]: 2025-07-07T06:13:44.836519Z INFO Daemon Daemon Configure sudoer Jul 7 06:13:44.853733 systemd-networkd[1357]: eth0: DHCPv4 address 10.200.4.33/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jul 7 06:13:46.309470 login[1818]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 06:13:46.314383 systemd-logind[1703]: New session 2 of user core. Jul 7 06:13:46.320840 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 06:13:46.470245 waagent[1811]: 2025-07-07T06:13:46.470163Z INFO Daemon Daemon Configure sshd Jul 7 06:13:46.566916 waagent[1811]: 2025-07-07T06:13:46.566804Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. 
Jul 7 06:13:46.571298 waagent[1811]: 2025-07-07T06:13:46.571225Z INFO Daemon Daemon Deploy ssh public key. Jul 7 06:13:46.658640 waagent[1811]: 2025-07-07T06:13:46.658611Z INFO Daemon Daemon Provisioning complete Jul 7 06:13:46.675288 waagent[1811]: 2025-07-07T06:13:46.675261Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 7 06:13:46.676227 waagent[1811]: 2025-07-07T06:13:46.675674Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 7 06:13:46.676227 waagent[1811]: 2025-07-07T06:13:46.675911Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 7 06:13:46.764336 waagent[1915]: 2025-07-07T06:13:46.764281Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 7 06:13:46.764529 waagent[1915]: 2025-07-07T06:13:46.764362Z INFO ExtHandler ExtHandler OS: flatcar 4372.0.1 Jul 7 06:13:46.764529 waagent[1915]: 2025-07-07T06:13:46.764398Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 7 06:13:46.764529 waagent[1915]: 2025-07-07T06:13:46.764433Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jul 7 06:13:47.134344 waagent[1915]: 2025-07-07T06:13:47.134254Z INFO ExtHandler ExtHandler Distro: flatcar-4372.0.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 7 06:13:47.134442 waagent[1915]: 2025-07-07T06:13:47.134404Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 06:13:47.134499 waagent[1915]: 2025-07-07T06:13:47.134469Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 06:13:47.142646 waagent[1915]: 2025-07-07T06:13:47.142599Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 06:13:47.150695 waagent[1915]: 2025-07-07T06:13:47.150671Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 7 06:13:47.151008 waagent[1915]: 
2025-07-07T06:13:47.150984Z INFO ExtHandler Jul 7 06:13:47.151044 waagent[1915]: 2025-07-07T06:13:47.151032Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: deef3931-af60-48e2-b96a-bad0f3f74080 eTag: 2058386544938371733 source: Fabric] Jul 7 06:13:47.151225 waagent[1915]: 2025-07-07T06:13:47.151207Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 7 06:13:47.151521 waagent[1915]: 2025-07-07T06:13:47.151502Z INFO ExtHandler Jul 7 06:13:47.151551 waagent[1915]: 2025-07-07T06:13:47.151537Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 7 06:13:47.155472 waagent[1915]: 2025-07-07T06:13:47.155450Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 7 06:13:47.479855 waagent[1915]: 2025-07-07T06:13:47.479814Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C5FA5AF5F3E64232372811AFA9A94403DC73E963', 'hasPrivateKey': True} Jul 7 06:13:47.480169 waagent[1915]: 2025-07-07T06:13:47.480145Z INFO ExtHandler Fetch goal state completed Jul 7 06:13:47.491503 waagent[1915]: 2025-07-07T06:13:47.491464Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 7 06:13:47.495284 waagent[1915]: 2025-07-07T06:13:47.495241Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1915 Jul 7 06:13:47.495385 waagent[1915]: 2025-07-07T06:13:47.495364Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 7 06:13:47.495594 waagent[1915]: 2025-07-07T06:13:47.495575Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 7 06:13:47.496476 waagent[1915]: 2025-07-07T06:13:47.496444Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.0.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 7 06:13:47.496767 waagent[1915]: 2025-07-07T06:13:47.496743Z INFO ExtHandler 
ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.0.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 7 06:13:47.496873 waagent[1915]: 2025-07-07T06:13:47.496855Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 7 06:13:47.497209 waagent[1915]: 2025-07-07T06:13:47.497190Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 7 06:13:47.831335 waagent[1915]: 2025-07-07T06:13:47.831264Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 7 06:13:47.831655 waagent[1915]: 2025-07-07T06:13:47.831441Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 7 06:13:47.836583 waagent[1915]: 2025-07-07T06:13:47.836472Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 7 06:13:47.841239 systemd[1]: Reload requested from client PID 1933 ('systemctl') (unit waagent.service)... Jul 7 06:13:47.841249 systemd[1]: Reloading... Jul 7 06:13:47.902723 zram_generator::config[1974]: No configuration found. Jul 7 06:13:47.974609 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:13:48.053211 systemd[1]: Reloading finished in 211 ms. Jul 7 06:13:48.077917 waagent[1915]: 2025-07-07T06:13:48.075536Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 7 06:13:48.077917 waagent[1915]: 2025-07-07T06:13:48.075619Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 7 06:13:48.516913 waagent[1915]: 2025-07-07T06:13:48.516866Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Jul 7 06:13:48.517122 waagent[1915]: 2025-07-07T06:13:48.517101Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 7 06:13:48.517793 waagent[1915]: 2025-07-07T06:13:48.517749Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 7 06:13:48.517836 waagent[1915]: 2025-07-07T06:13:48.517796Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 06:13:48.517878 waagent[1915]: 2025-07-07T06:13:48.517854Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 06:13:48.518363 waagent[1915]: 2025-07-07T06:13:48.518340Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 7 06:13:48.518415 waagent[1915]: 2025-07-07T06:13:48.518383Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 7 06:13:48.518603 waagent[1915]: 2025-07-07T06:13:48.518571Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 7 06:13:48.518885 waagent[1915]: 2025-07-07T06:13:48.518864Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jul 7 06:13:48.519000 waagent[1915]: 2025-07-07T06:13:48.518968Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 06:13:48.519172 waagent[1915]: 2025-07-07T06:13:48.519154Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 7 06:13:48.519172 waagent[1915]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 7 06:13:48.519172 waagent[1915]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jul 7 06:13:48.519172 waagent[1915]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 7 06:13:48.519172 waagent[1915]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 7 06:13:48.519172 waagent[1915]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 06:13:48.519172 waagent[1915]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 06:13:48.519395 waagent[1915]: 2025-07-07T06:13:48.519369Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 06:13:48.519607 waagent[1915]: 2025-07-07T06:13:48.519549Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 7 06:13:48.519689 waagent[1915]: 2025-07-07T06:13:48.519632Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
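The routing table that MonitorHandler dumps above comes straight from /proc/net/route, where addresses are little-endian hex. A minimal sketch for decoding those columns (the helper name is our own, not waagent's): "0104C80A" is the 10.200.4.1 gateway that also appears in the DHCPv4 lease earlier in the log.

```python
import socket
import struct

def decode_route_hex(hexaddr: str) -> str:
    """Convert a little-endian hex field from /proc/net/route to dotted quad."""
    return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

# Columns from the table above:
print(decode_route_hex("0104C80A"))  # 10.200.4.1  (default gateway)
print(decode_route_hex("0004C80A"))  # 10.200.4.0  (the eth0 /24 subnet)
print(decode_route_hex("10813FA8"))  # 168.63.129.16 (Azure wireserver route)
```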
Jul 7 06:13:48.519818 waagent[1915]: 2025-07-07T06:13:48.519798Z INFO EnvHandler ExtHandler Configure routes Jul 7 06:13:48.520015 waagent[1915]: 2025-07-07T06:13:48.519951Z INFO EnvHandler ExtHandler Gateway:None Jul 7 06:13:48.520114 waagent[1915]: 2025-07-07T06:13:48.520095Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 7 06:13:48.520800 waagent[1915]: 2025-07-07T06:13:48.520777Z INFO EnvHandler ExtHandler Routes:None Jul 7 06:13:48.527093 waagent[1915]: 2025-07-07T06:13:48.527064Z INFO ExtHandler ExtHandler Jul 7 06:13:48.527157 waagent[1915]: 2025-07-07T06:13:48.527115Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5a357cf2-4530-40b5-b315-4c3dc8c40837 correlation fcc2332c-337c-4019-9e6c-b9f32048c7ab created: 2025-07-07T06:12:40.389725Z] Jul 7 06:13:48.527368 waagent[1915]: 2025-07-07T06:13:48.527347Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jul 7 06:13:48.527717 waagent[1915]: 2025-07-07T06:13:48.527689Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Jul 7 06:13:48.553364 waagent[1915]: 2025-07-07T06:13:48.553332Z INFO MonitorHandler ExtHandler Network interfaces:
Jul 7 06:13:48.553364 waagent[1915]: Executing ['ip', '-a', '-o', 'link']:
Jul 7 06:13:48.553364 waagent[1915]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul 7 06:13:48.553364 waagent[1915]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:78:eb:09 brd ff:ff:ff:ff:ff:ff\ alias Network Device
Jul 7 06:13:48.553364 waagent[1915]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:78:eb:09 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Jul 7 06:13:48.553364 waagent[1915]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul 7 06:13:48.553364 waagent[1915]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jul 7 06:13:48.553364 waagent[1915]: 2: eth0 inet 10.200.4.33/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Jul 7 06:13:48.553364 waagent[1915]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul 7 06:13:48.553364 waagent[1915]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jul 7 06:13:48.553364 waagent[1915]: 2: eth0 inet6 fe80::7e1e:52ff:fe78:eb09/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 7 06:13:48.553364 waagent[1915]: 3: enP30832s1 inet6 fe80::7e1e:52ff:fe78:eb09/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 7 06:13:48.556194 waagent[1915]: 2025-07-07T06:13:48.555836Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Jul 7 06:13:48.556194 waagent[1915]: Try `iptables -h' or 'iptables --help' for more information.)
Jul 7 06:13:48.556194 waagent[1915]: 2025-07-07T06:13:48.556142Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 55AE71F5-630F-49CE-BCCC-811C3BF61D0F;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jul 7 06:13:48.591498 waagent[1915]: 2025-07-07T06:13:48.591457Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jul 7 06:13:48.591498 waagent[1915]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 7 06:13:48.591498 waagent[1915]: pkts bytes target prot opt in out source destination
Jul 7 06:13:48.591498 waagent[1915]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 7 06:13:48.591498 waagent[1915]: pkts bytes target prot opt in out source destination
Jul 7 06:13:48.591498 waagent[1915]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 7 06:13:48.591498 waagent[1915]: pkts bytes target prot opt in out source destination
Jul 7 06:13:48.591498 waagent[1915]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 7 06:13:48.591498 waagent[1915]: 3 535 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 7 06:13:48.591498 waagent[1915]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 7 06:13:48.594117 waagent[1915]: 2025-07-07T06:13:48.594075Z INFO EnvHandler ExtHandler Current Firewall rules:
Jul 7 06:13:48.594117 waagent[1915]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 7 06:13:48.594117 waagent[1915]: pkts bytes target prot opt in out source destination
Jul 7 06:13:48.594117 waagent[1915]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 7 06:13:48.594117 waagent[1915]: pkts bytes target prot opt in out source destination
Jul 7 06:13:48.594117 waagent[1915]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 7 06:13:48.594117 waagent[1915]: pkts bytes target prot opt in out source destination
Jul 7 06:13:48.594117 waagent[1915]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 7 06:13:48.594117 waagent[1915]: 4 587 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 7 06:13:48.594117 waagent[1915]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 7 06:13:51.471854 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:13:51.473624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:13:57.647395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:13:57.653927 (kubelet)[2069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:13:57.686478 kubelet[2069]: E0707 06:13:57.686449 2069 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:13:57.688867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:13:57.688979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:13:57.689270 systemd[1]: kubelet.service: Consumed 127ms CPU time, 111.1M memory peak.
Jul 7 06:14:00.348729 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 7 06:14:00.349679 systemd[1]: Started sshd@0-10.200.4.33:22-10.200.16.10:42862.service - OpenSSH per-connection server daemon (10.200.16.10:42862).
Jul 7 06:14:01.015047 sshd[2077]: Accepted publickey for core from 10.200.16.10 port 42862 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:14:01.016251 sshd-session[2077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:01.020350 systemd-logind[1703]: New session 3 of user core.
Jul 7 06:14:01.025833 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 7 06:14:01.542825 systemd[1]: Started sshd@1-10.200.4.33:22-10.200.16.10:51228.service - OpenSSH per-connection server daemon (10.200.16.10:51228).
Jul 7 06:14:02.138480 sshd[2082]: Accepted publickey for core from 10.200.16.10 port 51228 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:14:02.139763 sshd-session[2082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:02.143811 systemd-logind[1703]: New session 4 of user core.
Jul 7 06:14:02.153820 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 7 06:14:02.558976 sshd[2084]: Connection closed by 10.200.16.10 port 51228
Jul 7 06:14:02.559478 sshd-session[2082]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:02.562573 systemd[1]: sshd@1-10.200.4.33:22-10.200.16.10:51228.service: Deactivated successfully.
Jul 7 06:14:02.563940 systemd[1]: session-4.scope: Deactivated successfully.
Jul 7 06:14:02.564481 systemd-logind[1703]: Session 4 logged out. Waiting for processes to exit.
Jul 7 06:14:02.565459 systemd-logind[1703]: Removed session 4.
Jul 7 06:14:02.667494 systemd[1]: Started sshd@2-10.200.4.33:22-10.200.16.10:51242.service - OpenSSH per-connection server daemon (10.200.16.10:51242).
Jul 7 06:14:03.113968 chronyd[1732]: Selected source PHC0
Jul 7 06:14:03.264792 sshd[2090]: Accepted publickey for core from 10.200.16.10 port 51242 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:14:03.265966 sshd-session[2090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:03.269991 systemd-logind[1703]: New session 5 of user core.
Jul 7 06:14:03.274836 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 06:14:03.687746 sshd[2092]: Connection closed by 10.200.16.10 port 51242
Jul 7 06:14:03.688204 sshd-session[2090]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:03.691358 systemd[1]: sshd@2-10.200.4.33:22-10.200.16.10:51242.service: Deactivated successfully.
Jul 7 06:14:03.692801 systemd[1]: session-5.scope: Deactivated successfully.
Jul 7 06:14:03.693323 systemd-logind[1703]: Session 5 logged out. Waiting for processes to exit.
Jul 7 06:14:03.694368 systemd-logind[1703]: Removed session 5.
Jul 7 06:14:03.793484 systemd[1]: Started sshd@3-10.200.4.33:22-10.200.16.10:51252.service - OpenSSH per-connection server daemon (10.200.16.10:51252).
Jul 7 06:14:04.394671 sshd[2098]: Accepted publickey for core from 10.200.16.10 port 51252 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:14:04.395899 sshd-session[2098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:04.400027 systemd-logind[1703]: New session 6 of user core.
Jul 7 06:14:04.405844 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 06:14:04.815155 sshd[2100]: Connection closed by 10.200.16.10 port 51252
Jul 7 06:14:04.815649 sshd-session[2098]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:04.818342 systemd[1]: sshd@3-10.200.4.33:22-10.200.16.10:51252.service: Deactivated successfully.
Jul 7 06:14:04.820149 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 06:14:04.820926 systemd-logind[1703]: Session 6 logged out. Waiting for processes to exit.
Jul 7 06:14:04.821857 systemd-logind[1703]: Removed session 6.
Jul 7 06:14:04.921476 systemd[1]: Started sshd@4-10.200.4.33:22-10.200.16.10:51258.service - OpenSSH per-connection server daemon (10.200.16.10:51258).
Jul 7 06:14:05.519439 sshd[2106]: Accepted publickey for core from 10.200.16.10 port 51258 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:14:05.520648 sshd-session[2106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:05.524833 systemd-logind[1703]: New session 7 of user core.
Jul 7 06:14:05.529839 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 06:14:05.947039 sudo[2109]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 06:14:05.947246 sudo[2109]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:14:05.975448 sudo[2109]: pam_unix(sudo:session): session closed for user root
Jul 7 06:14:06.069502 sshd[2108]: Connection closed by 10.200.16.10 port 51258
Jul 7 06:14:06.070199 sshd-session[2106]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:06.073110 systemd[1]: sshd@4-10.200.4.33:22-10.200.16.10:51258.service: Deactivated successfully.
Jul 7 06:14:06.074499 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 06:14:06.075562 systemd-logind[1703]: Session 7 logged out. Waiting for processes to exit.
Jul 7 06:14:06.076465 systemd-logind[1703]: Removed session 7.
Jul 7 06:14:06.184589 systemd[1]: Started sshd@5-10.200.4.33:22-10.200.16.10:51272.service - OpenSSH per-connection server daemon (10.200.16.10:51272).
Jul 7 06:14:06.787548 sshd[2115]: Accepted publickey for core from 10.200.16.10 port 51272 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:14:06.788761 sshd-session[2115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:06.792664 systemd-logind[1703]: New session 8 of user core.
Jul 7 06:14:06.798842 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 06:14:07.115268 sudo[2119]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 06:14:07.115461 sudo[2119]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:14:07.121030 sudo[2119]: pam_unix(sudo:session): session closed for user root
Jul 7 06:14:07.124285 sudo[2118]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 7 06:14:07.124466 sudo[2118]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:14:07.130853 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:14:07.158451 augenrules[2141]: No rules
Jul 7 06:14:07.159307 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:14:07.159482 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:14:07.160130 sudo[2118]: pam_unix(sudo:session): session closed for user root
Jul 7 06:14:07.254350 sshd[2117]: Connection closed by 10.200.16.10 port 51272
Jul 7 06:14:07.254766 sshd-session[2115]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:07.257487 systemd[1]: sshd@5-10.200.4.33:22-10.200.16.10:51272.service: Deactivated successfully.
Jul 7 06:14:07.258686 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 06:14:07.259250 systemd-logind[1703]: Session 8 logged out. Waiting for processes to exit.
Jul 7 06:14:07.260135 systemd-logind[1703]: Removed session 8.
Jul 7 06:14:07.366557 systemd[1]: Started sshd@6-10.200.4.33:22-10.200.16.10:51288.service - OpenSSH per-connection server daemon (10.200.16.10:51288).
Jul 7 06:14:07.721612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 7 06:14:07.723165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:14:07.963531 sshd[2150]: Accepted publickey for core from 10.200.16.10 port 51288 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:14:07.964510 sshd-session[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:07.968069 systemd-logind[1703]: New session 9 of user core.
Jul 7 06:14:07.973829 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 06:14:08.291144 sudo[2156]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 06:14:08.291331 sudo[2156]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:14:10.862625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:14:10.870932 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:14:10.902855 kubelet[2178]: E0707 06:14:10.902809 2178 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:14:10.904126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:14:10.904236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:14:10.904547 systemd[1]: kubelet.service: Consumed 115ms CPU time, 108.9M memory peak.
Jul 7 06:14:12.207892 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 06:14:12.220951 (dockerd)[2186]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 06:14:13.863742 dockerd[2186]: time="2025-07-07T06:14:13.863610954Z" level=info msg="Starting up"
Jul 7 06:14:13.864720 dockerd[2186]: time="2025-07-07T06:14:13.864433383Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 7 06:14:15.674092 dockerd[2186]: time="2025-07-07T06:14:15.673856768Z" level=info msg="Loading containers: start."
Jul 7 06:14:15.741725 kernel: Initializing XFRM netlink socket
Jul 7 06:14:16.141356 systemd-networkd[1357]: docker0: Link UP
Jul 7 06:14:16.511629 dockerd[2186]: time="2025-07-07T06:14:16.511581632Z" level=info msg="Loading containers: done."
Jul 7 06:14:16.523621 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2384196270-merged.mount: Deactivated successfully.
Jul 7 06:14:17.209123 dockerd[2186]: time="2025-07-07T06:14:17.209061971Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 06:14:17.209501 dockerd[2186]: time="2025-07-07T06:14:17.209175755Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 7 06:14:17.209501 dockerd[2186]: time="2025-07-07T06:14:17.209291219Z" level=info msg="Initializing buildkit"
Jul 7 06:14:17.669709 dockerd[2186]: time="2025-07-07T06:14:17.669655228Z" level=info msg="Completed buildkit initialization"
Jul 7 06:14:17.676327 dockerd[2186]: time="2025-07-07T06:14:17.676253227Z" level=info msg="Daemon has completed initialization"
Jul 7 06:14:17.676458 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 06:14:17.676646 dockerd[2186]: time="2025-07-07T06:14:17.676372592Z" level=info msg="API listen on /run/docker.sock"
Jul 7 06:14:18.651792 containerd[1726]: time="2025-07-07T06:14:18.651755915Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 7 06:14:20.971517 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 7 06:14:20.972876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:14:23.445478 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jul 7 06:14:25.150241 update_engine[1704]: I20250707 06:14:25.150151 1704 update_attempter.cc:509] Updating boot flags...
Jul 7 06:14:26.174523 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:14:26.182893 (kubelet)[2430]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:14:26.216877 kubelet[2430]: E0707 06:14:26.216847 2430 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:14:26.218191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:14:26.218318 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:14:26.218618 systemd[1]: kubelet.service: Consumed 120ms CPU time, 108.7M memory peak.
Jul 7 06:14:27.305911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622148742.mount: Deactivated successfully.
Jul 7 06:14:28.836514 containerd[1726]: time="2025-07-07T06:14:28.836472901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:28.840510 containerd[1726]: time="2025-07-07T06:14:28.840484370Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077752"
Jul 7 06:14:28.845148 containerd[1726]: time="2025-07-07T06:14:28.845115025Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:28.850733 containerd[1726]: time="2025-07-07T06:14:28.850685493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:28.852720 containerd[1726]: time="2025-07-07T06:14:28.852675458Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 10.200888517s"
Jul 7 06:14:28.852796 containerd[1726]: time="2025-07-07T06:14:28.852735126Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 7 06:14:28.854191 containerd[1726]: time="2025-07-07T06:14:28.854061422Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 7 06:14:30.175766 containerd[1726]: time="2025-07-07T06:14:30.175726471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:30.180127 containerd[1726]: time="2025-07-07T06:14:30.180097099Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713302"
Jul 7 06:14:30.184620 containerd[1726]: time="2025-07-07T06:14:30.184581394Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:30.190574 containerd[1726]: time="2025-07-07T06:14:30.190536614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:30.191164 containerd[1726]: time="2025-07-07T06:14:30.191050107Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.336961688s"
Jul 7 06:14:30.191164 containerd[1726]: time="2025-07-07T06:14:30.191075871Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 7 06:14:30.191531 containerd[1726]: time="2025-07-07T06:14:30.191518414Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 7 06:14:31.371538 containerd[1726]: time="2025-07-07T06:14:31.371496419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:31.377422 containerd[1726]: time="2025-07-07T06:14:31.377396301Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783679"
Jul 7 06:14:31.385911 containerd[1726]: time="2025-07-07T06:14:31.385873309Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:31.391792 containerd[1726]: time="2025-07-07T06:14:31.391751731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:31.392408 containerd[1726]: time="2025-07-07T06:14:31.392283916Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.200692577s"
Jul 7 06:14:31.392408 containerd[1726]: time="2025-07-07T06:14:31.392309504Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 7 06:14:31.392877 containerd[1726]: time="2025-07-07T06:14:31.392816285Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 7 06:14:32.571147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3021917298.mount: Deactivated successfully.
Jul 7 06:14:32.906473 containerd[1726]: time="2025-07-07T06:14:32.906397954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:32.912845 containerd[1726]: time="2025-07-07T06:14:32.912800028Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951"
Jul 7 06:14:32.916774 containerd[1726]: time="2025-07-07T06:14:32.916735471Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:32.921156 containerd[1726]: time="2025-07-07T06:14:32.921115033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:32.921538 containerd[1726]: time="2025-07-07T06:14:32.921425659Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.528584066s"
Jul 7 06:14:32.921538 containerd[1726]: time="2025-07-07T06:14:32.921451745Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 7 06:14:32.921942 containerd[1726]: time="2025-07-07T06:14:32.921908652Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 7 06:14:33.678494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4251026636.mount: Deactivated successfully.
Jul 7 06:14:34.742871 containerd[1726]: time="2025-07-07T06:14:34.742834478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:34.749565 containerd[1726]: time="2025-07-07T06:14:34.749540425Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jul 7 06:14:34.755126 containerd[1726]: time="2025-07-07T06:14:34.755088647Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:34.759902 containerd[1726]: time="2025-07-07T06:14:34.759864817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:34.760517 containerd[1726]: time="2025-07-07T06:14:34.760388476Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.83844245s"
Jul 7 06:14:34.760517 containerd[1726]: time="2025-07-07T06:14:34.760415591Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 7 06:14:34.760968 containerd[1726]: time="2025-07-07T06:14:34.760942294Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 7 06:14:35.415621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1163583986.mount: Deactivated successfully.
Jul 7 06:14:35.439115 containerd[1726]: time="2025-07-07T06:14:35.439084953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:14:35.445182 containerd[1726]: time="2025-07-07T06:14:35.445155545Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jul 7 06:14:35.452278 containerd[1726]: time="2025-07-07T06:14:35.452243316Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:14:35.457150 containerd[1726]: time="2025-07-07T06:14:35.457115520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:14:35.457730 containerd[1726]: time="2025-07-07T06:14:35.457485269Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 696.510397ms"
Jul 7 06:14:35.457730 containerd[1726]: time="2025-07-07T06:14:35.457509768Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 7 06:14:35.457946 containerd[1726]: time="2025-07-07T06:14:35.457933357Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 7 06:14:36.147914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3881050686.mount: Deactivated successfully.
Jul 7 06:14:36.221488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 7 06:14:36.222649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:14:36.680015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:14:36.691913 (kubelet)[2575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:14:36.720250 kubelet[2575]: E0707 06:14:36.720215 2575 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:14:36.721670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:14:36.721819 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:14:36.722177 systemd[1]: kubelet.service: Consumed 115ms CPU time, 110.3M memory peak.
Jul 7 06:14:38.320503 containerd[1726]: time="2025-07-07T06:14:38.320421662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:38.341864 containerd[1726]: time="2025-07-07T06:14:38.341808578Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021"
Jul 7 06:14:38.345949 containerd[1726]: time="2025-07-07T06:14:38.345920986Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:38.352212 containerd[1726]: time="2025-07-07T06:14:38.352172739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:38.352992 containerd[1726]: time="2025-07-07T06:14:38.352957770Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.894961432s"
Jul 7 06:14:38.352992 containerd[1726]: time="2025-07-07T06:14:38.352985125Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 7 06:14:40.472592 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:14:40.472743 systemd[1]: kubelet.service: Consumed 115ms CPU time, 110.3M memory peak.
Jul 7 06:14:40.474483 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:14:40.499957 systemd[1]: Reload requested from client PID 2661 ('systemctl') (unit session-9.scope)...
Jul 7 06:14:40.499969 systemd[1]: Reloading...
Jul 7 06:14:40.569730 zram_generator::config[2710]: No configuration found.
Jul 7 06:14:40.650178 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:14:40.739773 systemd[1]: Reloading finished in 239 ms.
Jul 7 06:14:40.864387 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 7 06:14:40.864457 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 7 06:14:40.864654 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:14:40.864745 systemd[1]: kubelet.service: Consumed 72ms CPU time, 83.2M memory peak.
Jul 7 06:14:40.866567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:14:41.349734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:14:41.352793 (kubelet)[2774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 06:14:41.384624 kubelet[2774]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:14:41.384624 kubelet[2774]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 7 06:14:41.384624 kubelet[2774]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:14:41.384861 kubelet[2774]: I0707 06:14:41.384730 2774 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 06:14:41.495166 kubelet[2774]: I0707 06:14:41.495143 2774 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 7 06:14:41.495166 kubelet[2774]: I0707 06:14:41.495160 2774 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 06:14:41.495343 kubelet[2774]: I0707 06:14:41.495331 2774 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 7 06:14:41.520812 kubelet[2774]: E0707 06:14:41.520786 2774 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:14:41.522155 kubelet[2774]: I0707 06:14:41.522021 2774 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 06:14:41.527318 kubelet[2774]: I0707 06:14:41.527303 2774 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 7 06:14:41.530474 kubelet[2774]: I0707 06:14:41.530451 2774 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 06:14:41.531046 kubelet[2774]: I0707 06:14:41.531030 2774 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 7 06:14:41.531174 kubelet[2774]: I0707 06:14:41.531149 2774 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 06:14:41.531311 kubelet[2774]: I0707 06:14:41.531172 2774 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-a-04b45ab1a6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMa
nagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:14:41.531416 kubelet[2774]: I0707 06:14:41.531318 2774 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:14:41.531416 kubelet[2774]: I0707 06:14:41.531327 2774 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 06:14:41.531416 kubelet[2774]: I0707 06:14:41.531401 2774 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:14:41.533556 kubelet[2774]: I0707 06:14:41.533541 2774 kubelet.go:408] "Attempting to sync node with API server" Jul 7 06:14:41.533603 kubelet[2774]: I0707 06:14:41.533564 2774 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:14:41.533603 kubelet[2774]: I0707 06:14:41.533589 2774 kubelet.go:314] "Adding apiserver pod source" Jul 7 06:14:41.533603 kubelet[2774]: I0707 06:14:41.533602 2774 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:14:41.539642 kubelet[2774]: W0707 06:14:41.539111 2774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-04b45ab1a6&limit=500&resourceVersion=0": dial tcp 10.200.4.33:6443: connect: connection refused Jul 7 06:14:41.539642 kubelet[2774]: E0707 06:14:41.539163 2774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-04b45ab1a6&limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:41.539642 kubelet[2774]: W0707 06:14:41.539422 2774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.33:6443: connect: connection refused 
Jul 7 06:14:41.539642 kubelet[2774]: E0707 06:14:41.539449 2774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:41.539872 kubelet[2774]: I0707 06:14:41.539862 2774 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:14:41.540201 kubelet[2774]: I0707 06:14:41.540189 2774 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:14:41.540655 kubelet[2774]: W0707 06:14:41.540641 2774 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:14:41.542527 kubelet[2774]: I0707 06:14:41.542397 2774 server.go:1274] "Started kubelet" Jul 7 06:14:41.543314 kubelet[2774]: I0707 06:14:41.542985 2774 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:14:41.546720 kubelet[2774]: I0707 06:14:41.545823 2774 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:14:41.546720 kubelet[2774]: I0707 06:14:41.546437 2774 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:14:41.546720 kubelet[2774]: I0707 06:14:41.546653 2774 server.go:449] "Adding debug handlers to kubelet server" Jul 7 06:14:41.549640 kubelet[2774]: E0707 06:14:41.548456 2774 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.33:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.33:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.1-a-04b45ab1a6.184fe3774964e7e1 default 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-a-04b45ab1a6,UID:ci-4372.0.1-a-04b45ab1a6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-a-04b45ab1a6,},FirstTimestamp:2025-07-07 06:14:41.542375393 +0000 UTC m=+0.186777300,LastTimestamp:2025-07-07 06:14:41.542375393 +0000 UTC m=+0.186777300,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-a-04b45ab1a6,}" Jul 7 06:14:41.551359 kubelet[2774]: I0707 06:14:41.551338 2774 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:14:41.553937 kubelet[2774]: I0707 06:14:41.553018 2774 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:14:41.555587 kubelet[2774]: E0707 06:14:41.555574 2774 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:14:41.555912 kubelet[2774]: I0707 06:14:41.555850 2774 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 06:14:41.556138 kubelet[2774]: E0707 06:14:41.556012 2774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-04b45ab1a6\" not found" Jul 7 06:14:41.556270 kubelet[2774]: E0707 06:14:41.556242 2774 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-04b45ab1a6?timeout=10s\": dial tcp 10.200.4.33:6443: connect: connection refused" interval="200ms" Jul 7 06:14:41.556466 kubelet[2774]: I0707 06:14:41.556452 2774 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:14:41.556560 kubelet[2774]: I0707 06:14:41.556550 2774 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:14:41.557588 kubelet[2774]: I0707 06:14:41.557576 2774 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:14:41.558061 kubelet[2774]: I0707 06:14:41.558047 2774 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 06:14:41.558113 kubelet[2774]: I0707 06:14:41.558094 2774 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:14:41.565684 kubelet[2774]: I0707 06:14:41.565664 2774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:14:41.566494 kubelet[2774]: I0707 06:14:41.566470 2774 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 06:14:41.566494 kubelet[2774]: I0707 06:14:41.566489 2774 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 06:14:41.566572 kubelet[2774]: I0707 06:14:41.566502 2774 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 06:14:41.566572 kubelet[2774]: E0707 06:14:41.566529 2774 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:14:41.571021 kubelet[2774]: W0707 06:14:41.570993 2774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.33:6443: connect: connection refused Jul 7 06:14:41.571088 kubelet[2774]: E0707 06:14:41.571035 2774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:41.571246 kubelet[2774]: W0707 06:14:41.571205 2774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.33:6443: connect: connection refused Jul 7 06:14:41.571246 kubelet[2774]: E0707 06:14:41.571239 2774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:41.579535 kubelet[2774]: I0707 06:14:41.579524 2774 cpu_manager.go:214] "Starting CPU manager" 
policy="none" Jul 7 06:14:41.579535 kubelet[2774]: I0707 06:14:41.579534 2774 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 06:14:41.579632 kubelet[2774]: I0707 06:14:41.579548 2774 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:14:41.657047 kubelet[2774]: E0707 06:14:41.656728 2774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-04b45ab1a6\" not found" Jul 7 06:14:41.667025 kubelet[2774]: E0707 06:14:41.666997 2774 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 06:14:41.692520 kubelet[2774]: I0707 06:14:41.692501 2774 policy_none.go:49] "None policy: Start" Jul 7 06:14:41.693090 kubelet[2774]: I0707 06:14:41.693075 2774 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 06:14:41.693158 kubelet[2774]: I0707 06:14:41.693093 2774 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:14:41.728061 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:14:41.741502 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:14:41.744188 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 7 06:14:41.755192 kubelet[2774]: I0707 06:14:41.755179 2774 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:14:41.755324 kubelet[2774]: I0707 06:14:41.755314 2774 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:14:41.755353 kubelet[2774]: I0707 06:14:41.755327 2774 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:14:41.755795 kubelet[2774]: I0707 06:14:41.755598 2774 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:14:41.756570 kubelet[2774]: E0707 06:14:41.756550 2774 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-04b45ab1a6?timeout=10s\": dial tcp 10.200.4.33:6443: connect: connection refused" interval="400ms" Jul 7 06:14:41.756985 kubelet[2774]: E0707 06:14:41.756970 2774 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.1-a-04b45ab1a6\" not found" Jul 7 06:14:41.857268 kubelet[2774]: I0707 06:14:41.857239 2774 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:41.857644 kubelet[2774]: E0707 06:14:41.857599 2774 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.33:6443/api/v1/nodes\": dial tcp 10.200.4.33:6443: connect: connection refused" node="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:41.874428 systemd[1]: Created slice kubepods-burstable-pod07a123d679e7737bfd5251cc8b34ce49.slice - libcontainer container kubepods-burstable-pod07a123d679e7737bfd5251cc8b34ce49.slice. Jul 7 06:14:41.893367 systemd[1]: Created slice kubepods-burstable-pod45f5dedff73da6632da2f276681b3462.slice - libcontainer container kubepods-burstable-pod45f5dedff73da6632da2f276681b3462.slice. 
Jul 7 06:14:41.896063 systemd[1]: Created slice kubepods-burstable-pod528d28dbe06b060fb48a983a78340b08.slice - libcontainer container kubepods-burstable-pod528d28dbe06b060fb48a983a78340b08.slice. Jul 7 06:14:41.960969 kubelet[2774]: I0707 06:14:41.960781 2774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/528d28dbe06b060fb48a983a78340b08-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-a-04b45ab1a6\" (UID: \"528d28dbe06b060fb48a983a78340b08\") " pod="kube-system/kube-scheduler-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:41.960969 kubelet[2774]: I0707 06:14:41.960816 2774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07a123d679e7737bfd5251cc8b34ce49-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-a-04b45ab1a6\" (UID: \"07a123d679e7737bfd5251cc8b34ce49\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:41.960969 kubelet[2774]: I0707 06:14:41.960832 2774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07a123d679e7737bfd5251cc8b34ce49-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-a-04b45ab1a6\" (UID: \"07a123d679e7737bfd5251cc8b34ce49\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:41.960969 kubelet[2774]: I0707 06:14:41.960847 2774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07a123d679e7737bfd5251cc8b34ce49-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-a-04b45ab1a6\" (UID: \"07a123d679e7737bfd5251cc8b34ce49\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:41.960969 kubelet[2774]: I0707 06:14:41.960862 2774 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45f5dedff73da6632da2f276681b3462-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-04b45ab1a6\" (UID: \"45f5dedff73da6632da2f276681b3462\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:41.961102 kubelet[2774]: I0707 06:14:41.960875 2774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45f5dedff73da6632da2f276681b3462-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-04b45ab1a6\" (UID: \"45f5dedff73da6632da2f276681b3462\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:41.961102 kubelet[2774]: I0707 06:14:41.960889 2774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45f5dedff73da6632da2f276681b3462-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-a-04b45ab1a6\" (UID: \"45f5dedff73da6632da2f276681b3462\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:41.961102 kubelet[2774]: I0707 06:14:41.960904 2774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45f5dedff73da6632da2f276681b3462-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-a-04b45ab1a6\" (UID: \"45f5dedff73da6632da2f276681b3462\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:41.961102 kubelet[2774]: I0707 06:14:41.960917 2774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/45f5dedff73da6632da2f276681b3462-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-a-04b45ab1a6\" (UID: 
\"45f5dedff73da6632da2f276681b3462\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:42.059378 kubelet[2774]: I0707 06:14:42.059351 2774 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:42.059644 kubelet[2774]: E0707 06:14:42.059619 2774 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.33:6443/api/v1/nodes\": dial tcp 10.200.4.33:6443: connect: connection refused" node="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:42.157385 kubelet[2774]: E0707 06:14:42.157345 2774 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-04b45ab1a6?timeout=10s\": dial tcp 10.200.4.33:6443: connect: connection refused" interval="800ms" Jul 7 06:14:42.192443 containerd[1726]: time="2025-07-07T06:14:42.192391823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-a-04b45ab1a6,Uid:07a123d679e7737bfd5251cc8b34ce49,Namespace:kube-system,Attempt:0,}" Jul 7 06:14:42.196068 containerd[1726]: time="2025-07-07T06:14:42.195948892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-a-04b45ab1a6,Uid:45f5dedff73da6632da2f276681b3462,Namespace:kube-system,Attempt:0,}" Jul 7 06:14:42.198580 containerd[1726]: time="2025-07-07T06:14:42.198558247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-a-04b45ab1a6,Uid:528d28dbe06b060fb48a983a78340b08,Namespace:kube-system,Attempt:0,}" Jul 7 06:14:42.311480 containerd[1726]: time="2025-07-07T06:14:42.311447666Z" level=info msg="connecting to shim ea6d0c601bc334e0d971375e83380f7b48dbcb89b3975da5897ee42e28ad09ff" address="unix:///run/containerd/s/3a7558e43eab7c2a2ae271eb1dd3467b62134bc33228052e03deefc0bab9b8ef" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:14:42.322050 containerd[1726]: 
time="2025-07-07T06:14:42.322007239Z" level=info msg="connecting to shim c943ba64cd8bf151f20872d20c144e740fa2b25ce322585bf8d4f4968ab41bd6" address="unix:///run/containerd/s/02569b05f9413762ae3a1cb4e001b40a3d0ff296207abe781e6e95ddf15544d1" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:14:42.339975 containerd[1726]: time="2025-07-07T06:14:42.339951759Z" level=info msg="connecting to shim 220e11156405b4309ea24e3b45da1b5091608ceaa9ad591f4f332eb1d295705a" address="unix:///run/containerd/s/3dfb63f10572aa9f17d771c2ea310d5d56d16abaf60e8ab93c00d06ceab30c94" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:14:42.344968 systemd[1]: Started cri-containerd-ea6d0c601bc334e0d971375e83380f7b48dbcb89b3975da5897ee42e28ad09ff.scope - libcontainer container ea6d0c601bc334e0d971375e83380f7b48dbcb89b3975da5897ee42e28ad09ff. Jul 7 06:14:42.359856 systemd[1]: Started cri-containerd-c943ba64cd8bf151f20872d20c144e740fa2b25ce322585bf8d4f4968ab41bd6.scope - libcontainer container c943ba64cd8bf151f20872d20c144e740fa2b25ce322585bf8d4f4968ab41bd6. Jul 7 06:14:42.373983 systemd[1]: Started cri-containerd-220e11156405b4309ea24e3b45da1b5091608ceaa9ad591f4f332eb1d295705a.scope - libcontainer container 220e11156405b4309ea24e3b45da1b5091608ceaa9ad591f4f332eb1d295705a. 
Jul 7 06:14:42.415873 containerd[1726]: time="2025-07-07T06:14:42.415775812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-a-04b45ab1a6,Uid:07a123d679e7737bfd5251cc8b34ce49,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea6d0c601bc334e0d971375e83380f7b48dbcb89b3975da5897ee42e28ad09ff\"" Jul 7 06:14:42.421732 containerd[1726]: time="2025-07-07T06:14:42.421267914Z" level=info msg="CreateContainer within sandbox \"ea6d0c601bc334e0d971375e83380f7b48dbcb89b3975da5897ee42e28ad09ff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:14:42.437893 containerd[1726]: time="2025-07-07T06:14:42.437871876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-a-04b45ab1a6,Uid:45f5dedff73da6632da2f276681b3462,Namespace:kube-system,Attempt:0,} returns sandbox id \"c943ba64cd8bf151f20872d20c144e740fa2b25ce322585bf8d4f4968ab41bd6\"" Jul 7 06:14:42.439240 containerd[1726]: time="2025-07-07T06:14:42.439218958Z" level=info msg="CreateContainer within sandbox \"c943ba64cd8bf151f20872d20c144e740fa2b25ce322585bf8d4f4968ab41bd6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:14:42.446834 containerd[1726]: time="2025-07-07T06:14:42.446817205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-a-04b45ab1a6,Uid:528d28dbe06b060fb48a983a78340b08,Namespace:kube-system,Attempt:0,} returns sandbox id \"220e11156405b4309ea24e3b45da1b5091608ceaa9ad591f4f332eb1d295705a\"" Jul 7 06:14:42.448352 containerd[1726]: time="2025-07-07T06:14:42.448030329Z" level=info msg="CreateContainer within sandbox \"220e11156405b4309ea24e3b45da1b5091608ceaa9ad591f4f332eb1d295705a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:14:42.455919 containerd[1726]: time="2025-07-07T06:14:42.455897555Z" level=info msg="Container 93c60bd88ad2919389f83f029cbc4a658a632db7fd0b7399716b2273bc18eb38: CDI devices from CRI 
Config.CDIDevices: []" Jul 7 06:14:42.461429 kubelet[2774]: I0707 06:14:42.461414 2774 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:42.461686 kubelet[2774]: E0707 06:14:42.461661 2774 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.33:6443/api/v1/nodes\": dial tcp 10.200.4.33:6443: connect: connection refused" node="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:42.496221 kubelet[2774]: W0707 06:14:42.496162 2774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-04b45ab1a6&limit=500&resourceVersion=0": dial tcp 10.200.4.33:6443: connect: connection refused Jul 7 06:14:42.496299 kubelet[2774]: E0707 06:14:42.496231 2774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-04b45ab1a6&limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:42.513136 containerd[1726]: time="2025-07-07T06:14:42.513111380Z" level=info msg="CreateContainer within sandbox \"ea6d0c601bc334e0d971375e83380f7b48dbcb89b3975da5897ee42e28ad09ff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"93c60bd88ad2919389f83f029cbc4a658a632db7fd0b7399716b2273bc18eb38\"" Jul 7 06:14:42.513540 containerd[1726]: time="2025-07-07T06:14:42.513522085Z" level=info msg="StartContainer for \"93c60bd88ad2919389f83f029cbc4a658a632db7fd0b7399716b2273bc18eb38\"" Jul 7 06:14:42.514406 containerd[1726]: time="2025-07-07T06:14:42.514376406Z" level=info msg="connecting to shim 93c60bd88ad2919389f83f029cbc4a658a632db7fd0b7399716b2273bc18eb38" address="unix:///run/containerd/s/3a7558e43eab7c2a2ae271eb1dd3467b62134bc33228052e03deefc0bab9b8ef" 
protocol=ttrpc version=3 Jul 7 06:14:42.517246 containerd[1726]: time="2025-07-07T06:14:42.517219545Z" level=info msg="Container 355ffb463fe9471f3180a26a0ee09ffc0293e058670eb6ee0e3cee20b977f0ea: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:14:42.529845 systemd[1]: Started cri-containerd-93c60bd88ad2919389f83f029cbc4a658a632db7fd0b7399716b2273bc18eb38.scope - libcontainer container 93c60bd88ad2919389f83f029cbc4a658a632db7fd0b7399716b2273bc18eb38. Jul 7 06:14:42.538218 containerd[1726]: time="2025-07-07T06:14:42.538161402Z" level=info msg="CreateContainer within sandbox \"c943ba64cd8bf151f20872d20c144e740fa2b25ce322585bf8d4f4968ab41bd6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"355ffb463fe9471f3180a26a0ee09ffc0293e058670eb6ee0e3cee20b977f0ea\"" Jul 7 06:14:42.538467 containerd[1726]: time="2025-07-07T06:14:42.538451793Z" level=info msg="StartContainer for \"355ffb463fe9471f3180a26a0ee09ffc0293e058670eb6ee0e3cee20b977f0ea\"" Jul 7 06:14:42.540117 containerd[1726]: time="2025-07-07T06:14:42.539945456Z" level=info msg="connecting to shim 355ffb463fe9471f3180a26a0ee09ffc0293e058670eb6ee0e3cee20b977f0ea" address="unix:///run/containerd/s/02569b05f9413762ae3a1cb4e001b40a3d0ff296207abe781e6e95ddf15544d1" protocol=ttrpc version=3 Jul 7 06:14:42.540208 containerd[1726]: time="2025-07-07T06:14:42.540189872Z" level=info msg="Container b26f25c77536f34584f19d5cedaaf629999cce27e379a8665245f02a25a57208: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:14:42.554839 systemd[1]: Started cri-containerd-355ffb463fe9471f3180a26a0ee09ffc0293e058670eb6ee0e3cee20b977f0ea.scope - libcontainer container 355ffb463fe9471f3180a26a0ee09ffc0293e058670eb6ee0e3cee20b977f0ea. 
Jul 7 06:14:42.566767 containerd[1726]: time="2025-07-07T06:14:42.566285220Z" level=info msg="CreateContainer within sandbox \"220e11156405b4309ea24e3b45da1b5091608ceaa9ad591f4f332eb1d295705a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b26f25c77536f34584f19d5cedaaf629999cce27e379a8665245f02a25a57208\"" Jul 7 06:14:42.567730 containerd[1726]: time="2025-07-07T06:14:42.567152975Z" level=info msg="StartContainer for \"b26f25c77536f34584f19d5cedaaf629999cce27e379a8665245f02a25a57208\"" Jul 7 06:14:42.568122 containerd[1726]: time="2025-07-07T06:14:42.568098835Z" level=info msg="connecting to shim b26f25c77536f34584f19d5cedaaf629999cce27e379a8665245f02a25a57208" address="unix:///run/containerd/s/3dfb63f10572aa9f17d771c2ea310d5d56d16abaf60e8ab93c00d06ceab30c94" protocol=ttrpc version=3 Jul 7 06:14:42.598832 systemd[1]: Started cri-containerd-b26f25c77536f34584f19d5cedaaf629999cce27e379a8665245f02a25a57208.scope - libcontainer container b26f25c77536f34584f19d5cedaaf629999cce27e379a8665245f02a25a57208. 
Jul 7 06:14:42.624989 containerd[1726]: time="2025-07-07T06:14:42.624956345Z" level=info msg="StartContainer for \"355ffb463fe9471f3180a26a0ee09ffc0293e058670eb6ee0e3cee20b977f0ea\" returns successfully" Jul 7 06:14:42.625392 containerd[1726]: time="2025-07-07T06:14:42.625305244Z" level=info msg="StartContainer for \"93c60bd88ad2919389f83f029cbc4a658a632db7fd0b7399716b2273bc18eb38\" returns successfully" Jul 7 06:14:42.770493 containerd[1726]: time="2025-07-07T06:14:42.770473319Z" level=info msg="StartContainer for \"b26f25c77536f34584f19d5cedaaf629999cce27e379a8665245f02a25a57208\" returns successfully" Jul 7 06:14:43.263896 kubelet[2774]: I0707 06:14:43.263870 2774 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:44.073978 kubelet[2774]: E0707 06:14:44.073940 2774 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.1-a-04b45ab1a6\" not found" node="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:44.125069 kubelet[2774]: E0707 06:14:44.124863 2774 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4372.0.1-a-04b45ab1a6.184fe3774964e7e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-a-04b45ab1a6,UID:ci-4372.0.1-a-04b45ab1a6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-a-04b45ab1a6,},FirstTimestamp:2025-07-07 06:14:41.542375393 +0000 UTC m=+0.186777300,LastTimestamp:2025-07-07 06:14:41.542375393 +0000 UTC m=+0.186777300,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-a-04b45ab1a6,}" Jul 7 06:14:44.152087 kubelet[2774]: I0707 06:14:44.151753 2774 kubelet_node_status.go:75] "Successfully registered node" 
node="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:44.152087 kubelet[2774]: E0707 06:14:44.151781 2774 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4372.0.1-a-04b45ab1a6\": node \"ci-4372.0.1-a-04b45ab1a6\" not found" Jul 7 06:14:44.163488 kubelet[2774]: E0707 06:14:44.163467 2774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-04b45ab1a6\" not found" Jul 7 06:14:44.177725 kubelet[2774]: E0707 06:14:44.177651 2774 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4372.0.1-a-04b45ab1a6.184fe3774a2e20af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-a-04b45ab1a6,UID:ci-4372.0.1-a-04b45ab1a6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-a-04b45ab1a6,},FirstTimestamp:2025-07-07 06:14:41.555562671 +0000 UTC m=+0.199964576,LastTimestamp:2025-07-07 06:14:41.555562671 +0000 UTC m=+0.199964576,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-a-04b45ab1a6,}" Jul 7 06:14:44.264013 kubelet[2774]: E0707 06:14:44.263981 2774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-04b45ab1a6\" not found" Jul 7 06:14:44.364751 kubelet[2774]: E0707 06:14:44.364640 2774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-04b45ab1a6\" not found" Jul 7 06:14:44.464929 kubelet[2774]: E0707 06:14:44.464910 2774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-04b45ab1a6\" not found" Jul 7 06:14:44.565545 kubelet[2774]: E0707 06:14:44.565513 2774 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"ci-4372.0.1-a-04b45ab1a6\" not found" Jul 7 06:14:44.666448 kubelet[2774]: E0707 06:14:44.666367 2774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-04b45ab1a6\" not found" Jul 7 06:14:45.542217 kubelet[2774]: I0707 06:14:45.542192 2774 apiserver.go:52] "Watching apiserver" Jul 7 06:14:45.558883 kubelet[2774]: I0707 06:14:45.558860 2774 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 06:14:45.657664 kubelet[2774]: W0707 06:14:45.657602 2774 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 06:14:45.658392 kubelet[2774]: W0707 06:14:45.657935 2774 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 06:14:46.294492 systemd[1]: Reload requested from client PID 3045 ('systemctl') (unit session-9.scope)... Jul 7 06:14:46.294507 systemd[1]: Reloading... Jul 7 06:14:46.362753 zram_generator::config[3091]: No configuration found. Jul 7 06:14:46.433823 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:14:46.543839 systemd[1]: Reloading finished in 249 ms. Jul 7 06:14:46.566441 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:14:46.585357 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:14:46.585543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:14:46.585587 systemd[1]: kubelet.service: Consumed 450ms CPU time, 128.9M memory peak. Jul 7 06:14:46.587110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 7 06:14:47.001113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:14:47.007961 (kubelet)[3158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:14:47.040457 kubelet[3158]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:14:47.040457 kubelet[3158]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 06:14:47.040457 kubelet[3158]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:14:47.040675 kubelet[3158]: I0707 06:14:47.040514 3158 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:14:47.044715 kubelet[3158]: I0707 06:14:47.044340 3158 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 06:14:47.044715 kubelet[3158]: I0707 06:14:47.044355 3158 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:14:47.044715 kubelet[3158]: I0707 06:14:47.044495 3158 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 06:14:47.045328 kubelet[3158]: I0707 06:14:47.045313 3158 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 7 06:14:47.048734 kubelet[3158]: I0707 06:14:47.048366 3158 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:14:47.051044 kubelet[3158]: I0707 06:14:47.051022 3158 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:14:47.052593 kubelet[3158]: I0707 06:14:47.052571 3158 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 06:14:47.052673 kubelet[3158]: I0707 06:14:47.052643 3158 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 06:14:47.052743 kubelet[3158]: I0707 06:14:47.052721 3158 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:14:47.052862 kubelet[3158]: I0707 06:14:47.052743 3158 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-a-04b45ab1a6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signa
l":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:14:47.052947 kubelet[3158]: I0707 06:14:47.052868 3158 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:14:47.052947 kubelet[3158]: I0707 06:14:47.052877 3158 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 06:14:47.052947 kubelet[3158]: I0707 06:14:47.052903 3158 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:14:47.053014 kubelet[3158]: I0707 06:14:47.052984 3158 kubelet.go:408] "Attempting to sync node with API server" Jul 7 06:14:47.053014 kubelet[3158]: I0707 06:14:47.052993 3158 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:14:47.053082 kubelet[3158]: I0707 06:14:47.053016 3158 kubelet.go:314] "Adding apiserver pod source" Jul 7 06:14:47.053082 kubelet[3158]: I0707 06:14:47.053024 3158 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:14:47.057442 kubelet[3158]: I0707 06:14:47.057398 3158 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:14:47.058165 kubelet[3158]: I0707 06:14:47.058153 3158 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:14:47.058952 kubelet[3158]: I0707 06:14:47.058941 3158 server.go:1274] "Started kubelet" Jul 7 06:14:47.061506 kubelet[3158]: I0707 06:14:47.061475 3158 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 
06:14:47.064093 kubelet[3158]: I0707 06:14:47.063350 3158 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:14:47.065479 kubelet[3158]: I0707 06:14:47.065435 3158 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:14:47.067606 kubelet[3158]: I0707 06:14:47.063387 3158 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:14:47.067606 kubelet[3158]: I0707 06:14:47.067447 3158 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:14:47.068100 kubelet[3158]: I0707 06:14:47.068086 3158 server.go:449] "Adding debug handlers to kubelet server" Jul 7 06:14:47.070338 kubelet[3158]: I0707 06:14:47.070322 3158 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 06:14:47.072314 kubelet[3158]: I0707 06:14:47.072295 3158 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 06:14:47.072411 kubelet[3158]: I0707 06:14:47.072403 3158 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:14:47.073151 kubelet[3158]: I0707 06:14:47.073139 3158 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:14:47.073326 kubelet[3158]: I0707 06:14:47.073312 3158 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:14:47.077760 kubelet[3158]: E0707 06:14:47.077578 3158 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:14:47.078262 kubelet[3158]: I0707 06:14:47.078243 3158 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:14:47.078525 kubelet[3158]: I0707 06:14:47.078511 3158 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:14:47.079569 kubelet[3158]: I0707 06:14:47.079555 3158 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 06:14:47.079639 kubelet[3158]: I0707 06:14:47.079634 3158 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 06:14:47.079681 kubelet[3158]: I0707 06:14:47.079677 3158 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 06:14:47.079773 kubelet[3158]: E0707 06:14:47.079762 3158 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:14:47.117104 kubelet[3158]: I0707 06:14:47.117086 3158 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 06:14:47.117104 kubelet[3158]: I0707 06:14:47.117096 3158 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 06:14:47.117184 kubelet[3158]: I0707 06:14:47.117113 3158 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:14:47.117246 kubelet[3158]: I0707 06:14:47.117237 3158 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:14:47.117300 kubelet[3158]: I0707 06:14:47.117263 3158 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:14:47.117300 kubelet[3158]: I0707 06:14:47.117282 3158 policy_none.go:49] "None policy: Start" Jul 7 06:14:47.117743 kubelet[3158]: I0707 06:14:47.117732 3158 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 06:14:47.117835 kubelet[3158]: I0707 06:14:47.117748 3158 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:14:47.117878 kubelet[3158]: I0707 
06:14:47.117869 3158 state_mem.go:75] "Updated machine memory state" Jul 7 06:14:47.125759 kubelet[3158]: I0707 06:14:47.125725 3158 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:14:47.125974 kubelet[3158]: I0707 06:14:47.125924 3158 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:14:47.125974 kubelet[3158]: I0707 06:14:47.125933 3158 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:14:47.126422 kubelet[3158]: I0707 06:14:47.126328 3158 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:14:47.200460 kubelet[3158]: W0707 06:14:47.200446 3158 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 06:14:47.200587 kubelet[3158]: E0707 06:14:47.200575 3158 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4372.0.1-a-04b45ab1a6\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:47.200859 kubelet[3158]: W0707 06:14:47.200850 3158 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 06:14:47.201018 kubelet[3158]: E0707 06:14:47.200938 3158 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372.0.1-a-04b45ab1a6\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:47.201018 kubelet[3158]: W0707 06:14:47.200850 3158 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 06:14:47.234001 kubelet[3158]: I0707 06:14:47.233987 3158 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-a-04b45ab1a6" Jul 7 
06:14:47.263867 kubelet[3158]: I0707 06:14:47.263808 3158 kubelet_node_status.go:111] "Node was previously registered" node="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:47.263867 kubelet[3158]: I0707 06:14:47.263861 3158 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:47.373029 kubelet[3158]: I0707 06:14:47.373003 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45f5dedff73da6632da2f276681b3462-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-04b45ab1a6\" (UID: \"45f5dedff73da6632da2f276681b3462\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:47.373095 kubelet[3158]: I0707 06:14:47.373036 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45f5dedff73da6632da2f276681b3462-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-a-04b45ab1a6\" (UID: \"45f5dedff73da6632da2f276681b3462\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:47.373198 kubelet[3158]: I0707 06:14:47.373181 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45f5dedff73da6632da2f276681b3462-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-a-04b45ab1a6\" (UID: \"45f5dedff73da6632da2f276681b3462\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:47.373255 kubelet[3158]: I0707 06:14:47.373244 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/528d28dbe06b060fb48a983a78340b08-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-a-04b45ab1a6\" (UID: \"528d28dbe06b060fb48a983a78340b08\") " 
pod="kube-system/kube-scheduler-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:47.373286 kubelet[3158]: I0707 06:14:47.373262 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07a123d679e7737bfd5251cc8b34ce49-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-a-04b45ab1a6\" (UID: \"07a123d679e7737bfd5251cc8b34ce49\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:47.373314 kubelet[3158]: I0707 06:14:47.373277 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07a123d679e7737bfd5251cc8b34ce49-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-a-04b45ab1a6\" (UID: \"07a123d679e7737bfd5251cc8b34ce49\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:47.373314 kubelet[3158]: I0707 06:14:47.373307 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/45f5dedff73da6632da2f276681b3462-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-a-04b45ab1a6\" (UID: \"45f5dedff73da6632da2f276681b3462\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:47.373354 kubelet[3158]: I0707 06:14:47.373324 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07a123d679e7737bfd5251cc8b34ce49-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-a-04b45ab1a6\" (UID: \"07a123d679e7737bfd5251cc8b34ce49\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:47.373354 kubelet[3158]: I0707 06:14:47.373341 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/45f5dedff73da6632da2f276681b3462-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-04b45ab1a6\" (UID: \"45f5dedff73da6632da2f276681b3462\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:48.053548 kubelet[3158]: I0707 06:14:48.053509 3158 apiserver.go:52] "Watching apiserver" Jul 7 06:14:48.072456 kubelet[3158]: I0707 06:14:48.072418 3158 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 06:14:48.117293 kubelet[3158]: W0707 06:14:48.117237 3158 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 06:14:48.117456 kubelet[3158]: E0707 06:14:48.117373 3158 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372.0.1-a-04b45ab1a6\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.1-a-04b45ab1a6" Jul 7 06:14:48.137906 kubelet[3158]: I0707 06:14:48.137750 3158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-04b45ab1a6" podStartSLOduration=1.137737754 podStartE2EDuration="1.137737754s" podCreationTimestamp="2025-07-07 06:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:48.137639712 +0000 UTC m=+1.126878668" watchObservedRunningTime="2025-07-07 06:14:48.137737754 +0000 UTC m=+1.126976713" Jul 7 06:14:48.137906 kubelet[3158]: I0707 06:14:48.137838 3158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.1-a-04b45ab1a6" podStartSLOduration=3.137831078 podStartE2EDuration="3.137831078s" podCreationTimestamp="2025-07-07 06:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 
06:14:48.12728984 +0000 UTC m=+1.116528799" watchObservedRunningTime="2025-07-07 06:14:48.137831078 +0000 UTC m=+1.127070076" Jul 7 06:14:48.179939 kubelet[3158]: I0707 06:14:48.179497 3158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.1-a-04b45ab1a6" podStartSLOduration=3.179484342 podStartE2EDuration="3.179484342s" podCreationTimestamp="2025-07-07 06:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:48.165093338 +0000 UTC m=+1.154332298" watchObservedRunningTime="2025-07-07 06:14:48.179484342 +0000 UTC m=+1.168723302" Jul 7 06:14:51.046033 kubelet[3158]: I0707 06:14:51.046006 3158 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:14:51.046418 kubelet[3158]: I0707 06:14:51.046368 3158 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 06:14:51.046555 containerd[1726]: time="2025-07-07T06:14:51.046236237Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 7 06:14:52.006604 kubelet[3158]: I0707 06:14:52.005670 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aaa87924-5cf9-482b-b565-e6c977be1522-kube-proxy\") pod \"kube-proxy-9f2x9\" (UID: \"aaa87924-5cf9-482b-b565-e6c977be1522\") " pod="kube-system/kube-proxy-9f2x9" Jul 7 06:14:52.006604 kubelet[3158]: I0707 06:14:52.005714 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aaa87924-5cf9-482b-b565-e6c977be1522-xtables-lock\") pod \"kube-proxy-9f2x9\" (UID: \"aaa87924-5cf9-482b-b565-e6c977be1522\") " pod="kube-system/kube-proxy-9f2x9" Jul 7 06:14:52.006604 kubelet[3158]: I0707 06:14:52.005733 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aaa87924-5cf9-482b-b565-e6c977be1522-lib-modules\") pod \"kube-proxy-9f2x9\" (UID: \"aaa87924-5cf9-482b-b565-e6c977be1522\") " pod="kube-system/kube-proxy-9f2x9" Jul 7 06:14:52.006604 kubelet[3158]: I0707 06:14:52.005749 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxnkh\" (UniqueName: \"kubernetes.io/projected/aaa87924-5cf9-482b-b565-e6c977be1522-kube-api-access-vxnkh\") pod \"kube-proxy-9f2x9\" (UID: \"aaa87924-5cf9-482b-b565-e6c977be1522\") " pod="kube-system/kube-proxy-9f2x9" Jul 7 06:14:52.006457 systemd[1]: Created slice kubepods-besteffort-podaaa87924_5cf9_482b_b565_e6c977be1522.slice - libcontainer container kubepods-besteffort-podaaa87924_5cf9_482b_b565_e6c977be1522.slice. Jul 7 06:14:52.248182 systemd[1]: Created slice kubepods-besteffort-pod867067a5_9b81_42aa_8935_eed71488d125.slice - libcontainer container kubepods-besteffort-pod867067a5_9b81_42aa_8935_eed71488d125.slice. 
Jul 7 06:14:52.307529 kubelet[3158]: I0707 06:14:52.307402 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtjm5\" (UniqueName: \"kubernetes.io/projected/867067a5-9b81-42aa-8935-eed71488d125-kube-api-access-mtjm5\") pod \"tigera-operator-5bf8dfcb4-wfb29\" (UID: \"867067a5-9b81-42aa-8935-eed71488d125\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-wfb29" Jul 7 06:14:52.307529 kubelet[3158]: I0707 06:14:52.307432 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/867067a5-9b81-42aa-8935-eed71488d125-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-wfb29\" (UID: \"867067a5-9b81-42aa-8935-eed71488d125\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-wfb29" Jul 7 06:14:52.320933 containerd[1726]: time="2025-07-07T06:14:52.320887617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9f2x9,Uid:aaa87924-5cf9-482b-b565-e6c977be1522,Namespace:kube-system,Attempt:0,}" Jul 7 06:14:52.384724 containerd[1726]: time="2025-07-07T06:14:52.384675594Z" level=info msg="connecting to shim 0ba3a1626c21ec137b3271f0b24447854262169f97927a62f5e16b3e58adbae6" address="unix:///run/containerd/s/b9b44e72c6f42ad171d92427f1841e5803e525631d30c2a8885730dc21983f34" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:14:52.408005 systemd[1]: Started cri-containerd-0ba3a1626c21ec137b3271f0b24447854262169f97927a62f5e16b3e58adbae6.scope - libcontainer container 0ba3a1626c21ec137b3271f0b24447854262169f97927a62f5e16b3e58adbae6. 
Jul 7 06:14:52.437126 containerd[1726]: time="2025-07-07T06:14:52.437107347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9f2x9,Uid:aaa87924-5cf9-482b-b565-e6c977be1522,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ba3a1626c21ec137b3271f0b24447854262169f97927a62f5e16b3e58adbae6\"" Jul 7 06:14:52.439222 containerd[1726]: time="2025-07-07T06:14:52.439192834Z" level=info msg="CreateContainer within sandbox \"0ba3a1626c21ec137b3271f0b24447854262169f97927a62f5e16b3e58adbae6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 06:14:52.471481 containerd[1726]: time="2025-07-07T06:14:52.471270463Z" level=info msg="Container ebc261e3ab9aba5c6ee4990816ecf136d7af0280f1a20a1fd9c5319fad94bc75: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:14:52.490194 containerd[1726]: time="2025-07-07T06:14:52.490172736Z" level=info msg="CreateContainer within sandbox \"0ba3a1626c21ec137b3271f0b24447854262169f97927a62f5e16b3e58adbae6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ebc261e3ab9aba5c6ee4990816ecf136d7af0280f1a20a1fd9c5319fad94bc75\"" Jul 7 06:14:52.490619 containerd[1726]: time="2025-07-07T06:14:52.490598858Z" level=info msg="StartContainer for \"ebc261e3ab9aba5c6ee4990816ecf136d7af0280f1a20a1fd9c5319fad94bc75\"" Jul 7 06:14:52.491913 containerd[1726]: time="2025-07-07T06:14:52.491886140Z" level=info msg="connecting to shim ebc261e3ab9aba5c6ee4990816ecf136d7af0280f1a20a1fd9c5319fad94bc75" address="unix:///run/containerd/s/b9b44e72c6f42ad171d92427f1841e5803e525631d30c2a8885730dc21983f34" protocol=ttrpc version=3 Jul 7 06:14:52.508937 systemd[1]: Started cri-containerd-ebc261e3ab9aba5c6ee4990816ecf136d7af0280f1a20a1fd9c5319fad94bc75.scope - libcontainer container ebc261e3ab9aba5c6ee4990816ecf136d7af0280f1a20a1fd9c5319fad94bc75. 
Jul 7 06:14:52.535948 containerd[1726]: time="2025-07-07T06:14:52.535925360Z" level=info msg="StartContainer for \"ebc261e3ab9aba5c6ee4990816ecf136d7af0280f1a20a1fd9c5319fad94bc75\" returns successfully" Jul 7 06:14:52.551147 containerd[1726]: time="2025-07-07T06:14:52.551126855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-wfb29,Uid:867067a5-9b81-42aa-8935-eed71488d125,Namespace:tigera-operator,Attempt:0,}" Jul 7 06:14:52.610592 containerd[1726]: time="2025-07-07T06:14:52.610196070Z" level=info msg="connecting to shim 7e1223213aa451a2c59bcedecdfde0c1ab0bbcfa4e324cb59ef309dadbc16a41" address="unix:///run/containerd/s/986878d89563abe6eec70e8929f8a4f0343027fc2af0f2d3aabdd88987aaaca0" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:14:52.629848 systemd[1]: Started cri-containerd-7e1223213aa451a2c59bcedecdfde0c1ab0bbcfa4e324cb59ef309dadbc16a41.scope - libcontainer container 7e1223213aa451a2c59bcedecdfde0c1ab0bbcfa4e324cb59ef309dadbc16a41. Jul 7 06:14:52.664159 containerd[1726]: time="2025-07-07T06:14:52.664137573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-wfb29,Uid:867067a5-9b81-42aa-8935-eed71488d125,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7e1223213aa451a2c59bcedecdfde0c1ab0bbcfa4e324cb59ef309dadbc16a41\"" Jul 7 06:14:52.665484 containerd[1726]: time="2025-07-07T06:14:52.665416870Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 06:14:53.135731 kubelet[3158]: I0707 06:14:53.135258 3158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9f2x9" podStartSLOduration=2.13524386 podStartE2EDuration="2.13524386s" podCreationTimestamp="2025-07-07 06:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:53.135095155 +0000 UTC m=+6.124334113" watchObservedRunningTime="2025-07-07 06:14:53.13524386 +0000 
UTC m=+6.124482815" Jul 7 06:14:55.386355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1396185300.mount: Deactivated successfully. Jul 7 06:14:56.079953 containerd[1726]: time="2025-07-07T06:14:56.079920687Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:56.086852 containerd[1726]: time="2025-07-07T06:14:56.086817736Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 06:14:56.098960 containerd[1726]: time="2025-07-07T06:14:56.098930522Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:56.108435 containerd[1726]: time="2025-07-07T06:14:56.108389564Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:56.108838 containerd[1726]: time="2025-07-07T06:14:56.108755026Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.443293303s" Jul 7 06:14:56.108838 containerd[1726]: time="2025-07-07T06:14:56.108780602Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 06:14:56.110720 containerd[1726]: time="2025-07-07T06:14:56.110340482Z" level=info msg="CreateContainer within sandbox \"7e1223213aa451a2c59bcedecdfde0c1ab0bbcfa4e324cb59ef309dadbc16a41\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 06:14:56.320104 containerd[1726]: time="2025-07-07T06:14:56.320074261Z" level=info msg="Container 6d8e03b0ce68fba38ae6f4b66fd939308232d9d514825b50a03e4de864c12425: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:14:56.456830 containerd[1726]: time="2025-07-07T06:14:56.456761991Z" level=info msg="CreateContainer within sandbox \"7e1223213aa451a2c59bcedecdfde0c1ab0bbcfa4e324cb59ef309dadbc16a41\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6d8e03b0ce68fba38ae6f4b66fd939308232d9d514825b50a03e4de864c12425\"" Jul 7 06:14:56.457347 containerd[1726]: time="2025-07-07T06:14:56.457298369Z" level=info msg="StartContainer for \"6d8e03b0ce68fba38ae6f4b66fd939308232d9d514825b50a03e4de864c12425\"" Jul 7 06:14:56.458201 containerd[1726]: time="2025-07-07T06:14:56.458167560Z" level=info msg="connecting to shim 6d8e03b0ce68fba38ae6f4b66fd939308232d9d514825b50a03e4de864c12425" address="unix:///run/containerd/s/986878d89563abe6eec70e8929f8a4f0343027fc2af0f2d3aabdd88987aaaca0" protocol=ttrpc version=3 Jul 7 06:14:56.476818 systemd[1]: Started cri-containerd-6d8e03b0ce68fba38ae6f4b66fd939308232d9d514825b50a03e4de864c12425.scope - libcontainer container 6d8e03b0ce68fba38ae6f4b66fd939308232d9d514825b50a03e4de864c12425. 
Jul 7 06:14:56.502186 containerd[1726]: time="2025-07-07T06:14:56.502164084Z" level=info msg="StartContainer for \"6d8e03b0ce68fba38ae6f4b66fd939308232d9d514825b50a03e4de864c12425\" returns successfully" Jul 7 06:14:59.953510 kubelet[3158]: I0707 06:14:59.953301 3158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-wfb29" podStartSLOduration=4.508844774 podStartE2EDuration="7.953284845s" podCreationTimestamp="2025-07-07 06:14:52 +0000 UTC" firstStartedPulling="2025-07-07 06:14:52.664907486 +0000 UTC m=+5.654146437" lastFinishedPulling="2025-07-07 06:14:56.109347561 +0000 UTC m=+9.098586508" observedRunningTime="2025-07-07 06:14:57.261628621 +0000 UTC m=+10.250867579" watchObservedRunningTime="2025-07-07 06:14:59.953284845 +0000 UTC m=+12.942523860" Jul 7 06:15:03.646254 sudo[2156]: pam_unix(sudo:session): session closed for user root Jul 7 06:15:03.745000 sshd[2155]: Connection closed by 10.200.16.10 port 51288 Jul 7 06:15:03.745835 sshd-session[2150]: pam_unix(sshd:session): session closed for user core Jul 7 06:15:03.748690 systemd[1]: sshd@6-10.200.4.33:22-10.200.16.10:51288.service: Deactivated successfully. Jul 7 06:15:03.751547 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 06:15:03.751842 systemd[1]: session-9.scope: Consumed 2.889s CPU time, 222.8M memory peak. Jul 7 06:15:03.754990 systemd-logind[1703]: Session 9 logged out. Waiting for processes to exit. Jul 7 06:15:03.758108 systemd-logind[1703]: Removed session 9. Jul 7 06:15:09.024727 systemd[1]: Created slice kubepods-besteffort-podb815b486_7d7f_4a1d_8925_8c713a13163b.slice - libcontainer container kubepods-besteffort-podb815b486_7d7f_4a1d_8925_8c713a13163b.slice. 
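The pod_startup_latency_tracker entries above report two figures for tigera-operator-5bf8dfcb4-wfb29: a podStartE2EDuration (pod creation to observed running) and a shorter podStartSLOduration, which excludes the image-pull window. A minimal sketch reproducing that arithmetic from the timestamps printed in the log (the parser helper is hypothetical; kubelet prints nanoseconds, which Python's microsecond-resolution datetime forces us to truncate):

```python
from datetime import datetime

def parse(ts: str) -> datetime:
    """Parse a kubelet timestamp like '2025-07-07 06:14:56.109347561 +0000 UTC'.

    kubelet prints nanoseconds; datetime only holds microseconds, so the
    fractional part is truncated to six digits before parsing.
    """
    date, time_, tz, _ = ts.split(" ")  # trailing token is the literal 'UTC'
    if "." in time_:
        whole, frac = time_.split(".")
        time_ = f"{whole}.{frac[:6]}"
        fmt = "%Y-%m-%d %H:%M:%S.%f %z"
    else:
        fmt = "%Y-%m-%d %H:%M:%S %z"
    return datetime.strptime(f"{date} {time_} {tz}", fmt)

# Timestamps copied from the tigera-operator-5bf8dfcb4-wfb29 entry above.
created          = parse("2025-07-07 06:14:52 +0000 UTC")
observed_running = parse("2025-07-07 06:14:59.953284845 +0000 UTC")
pull_start       = parse("2025-07-07 06:14:52.664907486 +0000 UTC")
pull_end         = parse("2025-07-07 06:14:56.109347561 +0000 UTC")

e2e = (observed_running - created).total_seconds()    # podStartE2EDuration
slo = e2e - (pull_end - pull_start).total_seconds()   # podStartSLOduration
print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")
```

Within the truncated-microsecond precision, this recovers the logged 7.953284845s and 4.508844774s values.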
Jul 7 06:15:09.116470 kubelet[3158]: I0707 06:15:09.116437 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w4ns\" (UniqueName: \"kubernetes.io/projected/b815b486-7d7f-4a1d-8925-8c713a13163b-kube-api-access-4w4ns\") pod \"calico-typha-7cd4dff845-9cc2l\" (UID: \"b815b486-7d7f-4a1d-8925-8c713a13163b\") " pod="calico-system/calico-typha-7cd4dff845-9cc2l"
Jul 7 06:15:09.116470 kubelet[3158]: I0707 06:15:09.116471 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b815b486-7d7f-4a1d-8925-8c713a13163b-tigera-ca-bundle\") pod \"calico-typha-7cd4dff845-9cc2l\" (UID: \"b815b486-7d7f-4a1d-8925-8c713a13163b\") " pod="calico-system/calico-typha-7cd4dff845-9cc2l"
Jul 7 06:15:09.116797 kubelet[3158]: I0707 06:15:09.116488 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b815b486-7d7f-4a1d-8925-8c713a13163b-typha-certs\") pod \"calico-typha-7cd4dff845-9cc2l\" (UID: \"b815b486-7d7f-4a1d-8925-8c713a13163b\") " pod="calico-system/calico-typha-7cd4dff845-9cc2l"
Jul 7 06:15:09.337009 containerd[1726]: time="2025-07-07T06:15:09.336910386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cd4dff845-9cc2l,Uid:b815b486-7d7f-4a1d-8925-8c713a13163b,Namespace:calico-system,Attempt:0,}"
Jul 7 06:15:09.357084 systemd[1]: Created slice kubepods-besteffort-pod10df1d0a_632f_4649_bf66_b211783c2d38.slice - libcontainer container kubepods-besteffort-pod10df1d0a_632f_4649_bf66_b211783c2d38.slice.
Jul 7 06:15:09.410145 containerd[1726]: time="2025-07-07T06:15:09.410110464Z" level=info msg="connecting to shim d633f4e8c4c185e294d8e9a35746ef8c2c116c98a51401d08da34a07c9cb66cc" address="unix:///run/containerd/s/beb8d2505bd795a5d47f9b38a7c16d4aacfb93fae9fef8570e186674be7f667b" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:15:09.418943 kubelet[3158]: I0707 06:15:09.418901 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/10df1d0a-632f-4649-bf66-b211783c2d38-policysync\") pod \"calico-node-v5gcp\" (UID: \"10df1d0a-632f-4649-bf66-b211783c2d38\") " pod="calico-system/calico-node-v5gcp"
Jul 7 06:15:09.419116 kubelet[3158]: I0707 06:15:09.419052 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sknhg\" (UniqueName: \"kubernetes.io/projected/10df1d0a-632f-4649-bf66-b211783c2d38-kube-api-access-sknhg\") pod \"calico-node-v5gcp\" (UID: \"10df1d0a-632f-4649-bf66-b211783c2d38\") " pod="calico-system/calico-node-v5gcp"
Jul 7 06:15:09.419116 kubelet[3158]: I0707 06:15:09.419083 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/10df1d0a-632f-4649-bf66-b211783c2d38-node-certs\") pod \"calico-node-v5gcp\" (UID: \"10df1d0a-632f-4649-bf66-b211783c2d38\") " pod="calico-system/calico-node-v5gcp"
Jul 7 06:15:09.419116 kubelet[3158]: I0707 06:15:09.419099 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/10df1d0a-632f-4649-bf66-b211783c2d38-cni-net-dir\") pod \"calico-node-v5gcp\" (UID: \"10df1d0a-632f-4649-bf66-b211783c2d38\") " pod="calico-system/calico-node-v5gcp"
Jul 7 06:15:09.419277 kubelet[3158]: I0707 06:15:09.419209 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/10df1d0a-632f-4649-bf66-b211783c2d38-flexvol-driver-host\") pod \"calico-node-v5gcp\" (UID: \"10df1d0a-632f-4649-bf66-b211783c2d38\") " pod="calico-system/calico-node-v5gcp"
Jul 7 06:15:09.419277 kubelet[3158]: I0707 06:15:09.419225 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10df1d0a-632f-4649-bf66-b211783c2d38-lib-modules\") pod \"calico-node-v5gcp\" (UID: \"10df1d0a-632f-4649-bf66-b211783c2d38\") " pod="calico-system/calico-node-v5gcp"
Jul 7 06:15:09.419277 kubelet[3158]: I0707 06:15:09.419240 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/10df1d0a-632f-4649-bf66-b211783c2d38-var-run-calico\") pod \"calico-node-v5gcp\" (UID: \"10df1d0a-632f-4649-bf66-b211783c2d38\") " pod="calico-system/calico-node-v5gcp"
Jul 7 06:15:09.419277 kubelet[3158]: I0707 06:15:09.419256 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10df1d0a-632f-4649-bf66-b211783c2d38-xtables-lock\") pod \"calico-node-v5gcp\" (UID: \"10df1d0a-632f-4649-bf66-b211783c2d38\") " pod="calico-system/calico-node-v5gcp"
Jul 7 06:15:09.419495 kubelet[3158]: I0707 06:15:09.419375 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10df1d0a-632f-4649-bf66-b211783c2d38-tigera-ca-bundle\") pod \"calico-node-v5gcp\" (UID: \"10df1d0a-632f-4649-bf66-b211783c2d38\") " pod="calico-system/calico-node-v5gcp"
Jul 7 06:15:09.419495 kubelet[3158]: I0707 06:15:09.419409 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/10df1d0a-632f-4649-bf66-b211783c2d38-cni-log-dir\") pod \"calico-node-v5gcp\" (UID: \"10df1d0a-632f-4649-bf66-b211783c2d38\") " pod="calico-system/calico-node-v5gcp"
Jul 7 06:15:09.419495 kubelet[3158]: I0707 06:15:09.419429 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/10df1d0a-632f-4649-bf66-b211783c2d38-var-lib-calico\") pod \"calico-node-v5gcp\" (UID: \"10df1d0a-632f-4649-bf66-b211783c2d38\") " pod="calico-system/calico-node-v5gcp"
Jul 7 06:15:09.419495 kubelet[3158]: I0707 06:15:09.419468 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/10df1d0a-632f-4649-bf66-b211783c2d38-cni-bin-dir\") pod \"calico-node-v5gcp\" (UID: \"10df1d0a-632f-4649-bf66-b211783c2d38\") " pod="calico-system/calico-node-v5gcp"
Jul 7 06:15:09.431876 systemd[1]: Started cri-containerd-d633f4e8c4c185e294d8e9a35746ef8c2c116c98a51401d08da34a07c9cb66cc.scope - libcontainer container d633f4e8c4c185e294d8e9a35746ef8c2c116c98a51401d08da34a07c9cb66cc.
Jul 7 06:15:09.474152 containerd[1726]: time="2025-07-07T06:15:09.473847085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cd4dff845-9cc2l,Uid:b815b486-7d7f-4a1d-8925-8c713a13163b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d633f4e8c4c185e294d8e9a35746ef8c2c116c98a51401d08da34a07c9cb66cc\""
Jul 7 06:15:09.478102 containerd[1726]: time="2025-07-07T06:15:09.477302778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 7 06:15:09.621461 kubelet[3158]: E0707 06:15:09.621403 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:15:09.621461 kubelet[3158]: W0707 06:15:09.621420 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:15:09.621461 kubelet[3158]: E0707 06:15:09.621436 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 7 06:15:09.654759 kubelet[3158]: E0707 06:15:09.654729 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xpqzl" podUID="231c5af8-3370-4b3f-ab8c-8299c58a8f69" Jul 7 06:15:09.702446 kubelet[3158]: E0707 06:15:09.702395 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.702446 kubelet[3158]: W0707 06:15:09.702412 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.702446 kubelet[3158]: E0707 06:15:09.702428 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:09.702883 kubelet[3158]: E0707 06:15:09.702792 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.702883 kubelet[3158]: W0707 06:15:09.702814 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.702883 kubelet[3158]: E0707 06:15:09.702825 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.703164 kubelet[3158]: E0707 06:15:09.703109 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.703164 kubelet[3158]: W0707 06:15:09.703124 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.703164 kubelet[3158]: E0707 06:15:09.703134 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:09.703713 kubelet[3158]: E0707 06:15:09.703546 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.703713 kubelet[3158]: W0707 06:15:09.703557 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.703713 kubelet[3158]: E0707 06:15:09.703568 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.704079 kubelet[3158]: E0707 06:15:09.703939 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.704079 kubelet[3158]: W0707 06:15:09.703947 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.704079 kubelet[3158]: E0707 06:15:09.703955 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:09.704379 kubelet[3158]: E0707 06:15:09.704236 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.704379 kubelet[3158]: W0707 06:15:09.704243 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.704379 kubelet[3158]: E0707 06:15:09.704250 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.704645 kubelet[3158]: E0707 06:15:09.704582 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.704645 kubelet[3158]: W0707 06:15:09.704589 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.704645 kubelet[3158]: E0707 06:15:09.704596 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:09.705060 kubelet[3158]: E0707 06:15:09.704963 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.705060 kubelet[3158]: W0707 06:15:09.704975 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.705060 kubelet[3158]: E0707 06:15:09.704986 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.705392 kubelet[3158]: E0707 06:15:09.705382 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.705588 kubelet[3158]: W0707 06:15:09.705577 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.705976 kubelet[3158]: E0707 06:15:09.705818 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:09.706261 kubelet[3158]: E0707 06:15:09.706121 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.706419 kubelet[3158]: W0707 06:15:09.706329 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.706476 kubelet[3158]: E0707 06:15:09.706468 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.706999 kubelet[3158]: E0707 06:15:09.706842 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.706999 kubelet[3158]: W0707 06:15:09.706854 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.706999 kubelet[3158]: E0707 06:15:09.706865 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:09.707574 kubelet[3158]: E0707 06:15:09.707378 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.707574 kubelet[3158]: W0707 06:15:09.707390 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.707574 kubelet[3158]: E0707 06:15:09.707402 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.708211 kubelet[3158]: E0707 06:15:09.708048 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.708211 kubelet[3158]: W0707 06:15:09.708061 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.708211 kubelet[3158]: E0707 06:15:09.708073 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:09.708648 kubelet[3158]: E0707 06:15:09.708549 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.708887 kubelet[3158]: W0707 06:15:09.708561 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.708887 kubelet[3158]: E0707 06:15:09.708827 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.709278 kubelet[3158]: E0707 06:15:09.709265 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.709589 kubelet[3158]: W0707 06:15:09.709410 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.709589 kubelet[3158]: E0707 06:15:09.709446 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:09.710045 kubelet[3158]: E0707 06:15:09.710020 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.710045 kubelet[3158]: W0707 06:15:09.710033 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.710268 kubelet[3158]: E0707 06:15:09.710209 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.710581 kubelet[3158]: E0707 06:15:09.710521 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.710581 kubelet[3158]: W0707 06:15:09.710569 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.710872 kubelet[3158]: E0707 06:15:09.710804 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:09.711081 kubelet[3158]: E0707 06:15:09.711061 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.711081 kubelet[3158]: W0707 06:15:09.711070 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.711261 kubelet[3158]: E0707 06:15:09.711239 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.711524 kubelet[3158]: E0707 06:15:09.711459 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.711524 kubelet[3158]: W0707 06:15:09.711467 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.711524 kubelet[3158]: E0707 06:15:09.711475 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:09.711893 kubelet[3158]: E0707 06:15:09.711769 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.711893 kubelet[3158]: W0707 06:15:09.711777 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.711893 kubelet[3158]: E0707 06:15:09.711785 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.712270 kubelet[3158]: E0707 06:15:09.712202 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.712270 kubelet[3158]: W0707 06:15:09.712212 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.712270 kubelet[3158]: E0707 06:15:09.712222 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:09.722539 kubelet[3158]: E0707 06:15:09.722509 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.722539 kubelet[3158]: W0707 06:15:09.722535 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.722683 kubelet[3158]: E0707 06:15:09.722545 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.722683 kubelet[3158]: I0707 06:15:09.722566 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdqvl\" (UniqueName: \"kubernetes.io/projected/231c5af8-3370-4b3f-ab8c-8299c58a8f69-kube-api-access-pdqvl\") pod \"csi-node-driver-xpqzl\" (UID: \"231c5af8-3370-4b3f-ab8c-8299c58a8f69\") " pod="calico-system/csi-node-driver-xpqzl" Jul 7 06:15:09.722755 kubelet[3158]: E0707 06:15:09.722721 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.722755 kubelet[3158]: W0707 06:15:09.722730 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.722755 kubelet[3158]: E0707 06:15:09.722741 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.722819 kubelet[3158]: I0707 06:15:09.722758 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/231c5af8-3370-4b3f-ab8c-8299c58a8f69-socket-dir\") pod \"csi-node-driver-xpqzl\" (UID: \"231c5af8-3370-4b3f-ab8c-8299c58a8f69\") " pod="calico-system/csi-node-driver-xpqzl" Jul 7 06:15:09.722892 kubelet[3158]: E0707 06:15:09.722880 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.722892 kubelet[3158]: W0707 06:15:09.722890 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.722948 kubelet[3158]: E0707 06:15:09.722904 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.722948 kubelet[3158]: I0707 06:15:09.722918 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/231c5af8-3370-4b3f-ab8c-8299c58a8f69-registration-dir\") pod \"csi-node-driver-xpqzl\" (UID: \"231c5af8-3370-4b3f-ab8c-8299c58a8f69\") " pod="calico-system/csi-node-driver-xpqzl" Jul 7 06:15:09.723046 kubelet[3158]: E0707 06:15:09.723021 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.723046 kubelet[3158]: W0707 06:15:09.723043 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.723132 kubelet[3158]: E0707 06:15:09.723050 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.723132 kubelet[3158]: I0707 06:15:09.723064 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/231c5af8-3370-4b3f-ab8c-8299c58a8f69-varrun\") pod \"csi-node-driver-xpqzl\" (UID: \"231c5af8-3370-4b3f-ab8c-8299c58a8f69\") " pod="calico-system/csi-node-driver-xpqzl" Jul 7 06:15:09.723191 kubelet[3158]: E0707 06:15:09.723178 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.723191 kubelet[3158]: W0707 06:15:09.723185 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.723231 kubelet[3158]: E0707 06:15:09.723194 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:09.723302 kubelet[3158]: E0707 06:15:09.723277 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.723302 kubelet[3158]: W0707 06:15:09.723283 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.723302 kubelet[3158]: E0707 06:15:09.723293 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.723444 kubelet[3158]: E0707 06:15:09.723425 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.723444 kubelet[3158]: W0707 06:15:09.723443 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.723487 kubelet[3158]: E0707 06:15:09.723453 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:09.723545 kubelet[3158]: E0707 06:15:09.723532 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.723545 kubelet[3158]: W0707 06:15:09.723540 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.723614 kubelet[3158]: E0707 06:15:09.723546 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.723668 kubelet[3158]: E0707 06:15:09.723646 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.723668 kubelet[3158]: W0707 06:15:09.723666 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.723720 kubelet[3158]: E0707 06:15:09.723677 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:09.723720 kubelet[3158]: I0707 06:15:09.723693 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/231c5af8-3370-4b3f-ab8c-8299c58a8f69-kubelet-dir\") pod \"csi-node-driver-xpqzl\" (UID: \"231c5af8-3370-4b3f-ab8c-8299c58a8f69\") " pod="calico-system/csi-node-driver-xpqzl" Jul 7 06:15:09.723815 kubelet[3158]: E0707 06:15:09.723791 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.723815 kubelet[3158]: W0707 06:15:09.723813 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.723858 kubelet[3158]: E0707 06:15:09.723826 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:09.913103 kubelet[3158]: E0707 06:15:09.913002 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:09.913103 kubelet[3158]: W0707 06:15:09.913023 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:09.913103 kubelet[3158]: E0707 06:15:09.913035 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:09.960262 containerd[1726]: time="2025-07-07T06:15:09.960236850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v5gcp,Uid:10df1d0a-632f-4649-bf66-b211783c2d38,Namespace:calico-system,Attempt:0,}" Jul 7 06:15:10.053328 containerd[1726]: time="2025-07-07T06:15:10.053280095Z" level=info msg="connecting to shim b8e420e7dd1c5efd2859784afe22fc14b30ca302e77bb0c659c0ae03c25b1c3e" address="unix:///run/containerd/s/d563630c667d9c96a28dac8b04c3558a263a5166387867b601ac51e2c363f496" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:15:10.078914 systemd[1]: Started cri-containerd-b8e420e7dd1c5efd2859784afe22fc14b30ca302e77bb0c659c0ae03c25b1c3e.scope - libcontainer container b8e420e7dd1c5efd2859784afe22fc14b30ca302e77bb0c659c0ae03c25b1c3e. Jul 7 06:15:10.127799 containerd[1726]: time="2025-07-07T06:15:10.127772115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v5gcp,Uid:10df1d0a-632f-4649-bf66-b211783c2d38,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8e420e7dd1c5efd2859784afe22fc14b30ca302e77bb0c659c0ae03c25b1c3e\"" Jul 7 06:15:11.054335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2807533333.mount: Deactivated successfully. 
Jul 7 06:15:11.080838 kubelet[3158]: E0707 06:15:11.080775 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xpqzl" podUID="231c5af8-3370-4b3f-ab8c-8299c58a8f69" Jul 7 06:15:12.182278 containerd[1726]: time="2025-07-07T06:15:12.182242617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:12.186686 containerd[1726]: time="2025-07-07T06:15:12.186653201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 06:15:12.190178 containerd[1726]: time="2025-07-07T06:15:12.190146555Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:12.194481 containerd[1726]: time="2025-07-07T06:15:12.194444437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:12.194961 containerd[1726]: time="2025-07-07T06:15:12.194690458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.717363713s" Jul 7 06:15:12.194961 containerd[1726]: time="2025-07-07T06:15:12.194744373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 06:15:12.195689 containerd[1726]: time="2025-07-07T06:15:12.195666652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 06:15:12.203468 containerd[1726]: time="2025-07-07T06:15:12.202671947Z" level=info msg="CreateContainer within sandbox \"d633f4e8c4c185e294d8e9a35746ef8c2c116c98a51401d08da34a07c9cb66cc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 06:15:12.236575 containerd[1726]: time="2025-07-07T06:15:12.235774287Z" level=info msg="Container 3e655e5fe988d8bca122e251c9b609897bae2524538cfe13faa16f09bb840c9d: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:12.263792 containerd[1726]: time="2025-07-07T06:15:12.263770941Z" level=info msg="CreateContainer within sandbox \"d633f4e8c4c185e294d8e9a35746ef8c2c116c98a51401d08da34a07c9cb66cc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3e655e5fe988d8bca122e251c9b609897bae2524538cfe13faa16f09bb840c9d\"" Jul 7 06:15:12.264190 containerd[1726]: time="2025-07-07T06:15:12.264157081Z" level=info msg="StartContainer for \"3e655e5fe988d8bca122e251c9b609897bae2524538cfe13faa16f09bb840c9d\"" Jul 7 06:15:12.265196 containerd[1726]: time="2025-07-07T06:15:12.265172444Z" level=info msg="connecting to shim 3e655e5fe988d8bca122e251c9b609897bae2524538cfe13faa16f09bb840c9d" address="unix:///run/containerd/s/beb8d2505bd795a5d47f9b38a7c16d4aacfb93fae9fef8570e186674be7f667b" protocol=ttrpc version=3 Jul 7 06:15:12.284871 systemd[1]: Started cri-containerd-3e655e5fe988d8bca122e251c9b609897bae2524538cfe13faa16f09bb840c9d.scope - libcontainer container 3e655e5fe988d8bca122e251c9b609897bae2524538cfe13faa16f09bb840c9d. 
Jul 7 06:15:12.326230 containerd[1726]: time="2025-07-07T06:15:12.326200887Z" level=info msg="StartContainer for \"3e655e5fe988d8bca122e251c9b609897bae2524538cfe13faa16f09bb840c9d\" returns successfully" Jul 7 06:15:13.080723 kubelet[3158]: E0707 06:15:13.080648 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xpqzl" podUID="231c5af8-3370-4b3f-ab8c-8299c58a8f69" Jul 7 06:15:13.235203 kubelet[3158]: E0707 06:15:13.235179 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.235444 kubelet[3158]: W0707 06:15:13.235318 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.235444 kubelet[3158]: E0707 06:15:13.235341 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:13.235653 kubelet[3158]: E0707 06:15:13.235579 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.235653 kubelet[3158]: W0707 06:15:13.235588 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.235653 kubelet[3158]: E0707 06:15:13.235598 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:13.238572 kubelet[3158]: E0707 06:15:13.238508 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.238572 kubelet[3158]: W0707 06:15:13.238512 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.238572 kubelet[3158]: E0707 06:15:13.238518 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:13.248422 kubelet[3158]: E0707 06:15:13.248394 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.248422 kubelet[3158]: W0707 06:15:13.248417 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.248524 kubelet[3158]: E0707 06:15:13.248427 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:13.248546 kubelet[3158]: E0707 06:15:13.248541 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.248564 kubelet[3158]: W0707 06:15:13.248547 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.248564 kubelet[3158]: E0707 06:15:13.248554 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:13.248679 kubelet[3158]: E0707 06:15:13.248662 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.248679 kubelet[3158]: W0707 06:15:13.248676 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.248744 kubelet[3158]: E0707 06:15:13.248690 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:13.248836 kubelet[3158]: E0707 06:15:13.248808 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.248836 kubelet[3158]: W0707 06:15:13.248830 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.248892 kubelet[3158]: E0707 06:15:13.248840 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:13.248964 kubelet[3158]: E0707 06:15:13.248958 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.248990 kubelet[3158]: W0707 06:15:13.248975 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.248990 kubelet[3158]: E0707 06:15:13.248986 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:13.249075 kubelet[3158]: E0707 06:15:13.249064 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.249075 kubelet[3158]: W0707 06:15:13.249071 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.249122 kubelet[3158]: E0707 06:15:13.249077 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:13.249198 kubelet[3158]: E0707 06:15:13.249180 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.249198 kubelet[3158]: W0707 06:15:13.249197 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.249243 kubelet[3158]: E0707 06:15:13.249205 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:13.249411 kubelet[3158]: E0707 06:15:13.249389 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.249411 kubelet[3158]: W0707 06:15:13.249410 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.249454 kubelet[3158]: E0707 06:15:13.249418 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:13.249591 kubelet[3158]: E0707 06:15:13.249578 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.249591 kubelet[3158]: W0707 06:15:13.249588 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.249647 kubelet[3158]: E0707 06:15:13.249601 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:13.249767 kubelet[3158]: E0707 06:15:13.249745 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.249767 kubelet[3158]: W0707 06:15:13.249766 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.249844 kubelet[3158]: E0707 06:15:13.249775 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:13.249883 kubelet[3158]: E0707 06:15:13.249870 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.249883 kubelet[3158]: W0707 06:15:13.249877 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.249970 kubelet[3158]: E0707 06:15:13.249951 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:13.249970 kubelet[3158]: E0707 06:15:13.249957 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.249970 kubelet[3158]: W0707 06:15:13.249961 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.250076 kubelet[3158]: E0707 06:15:13.250024 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:13.250076 kubelet[3158]: E0707 06:15:13.250051 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.250076 kubelet[3158]: W0707 06:15:13.250055 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.250076 kubelet[3158]: E0707 06:15:13.250066 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:13.250189 kubelet[3158]: E0707 06:15:13.250177 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.250189 kubelet[3158]: W0707 06:15:13.250182 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.250227 kubelet[3158]: E0707 06:15:13.250194 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:13.250322 kubelet[3158]: E0707 06:15:13.250316 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.250348 kubelet[3158]: W0707 06:15:13.250323 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.250348 kubelet[3158]: E0707 06:15:13.250329 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:13.250436 kubelet[3158]: E0707 06:15:13.250416 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.250436 kubelet[3158]: W0707 06:15:13.250421 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.250436 kubelet[3158]: E0707 06:15:13.250428 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:13.250747 kubelet[3158]: E0707 06:15:13.250734 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.250747 kubelet[3158]: W0707 06:15:13.250746 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.250902 kubelet[3158]: E0707 06:15:13.250758 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:15:13.250902 kubelet[3158]: E0707 06:15:13.250871 3158 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:15:13.250902 kubelet[3158]: W0707 06:15:13.250875 3158 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:15:13.250902 kubelet[3158]: E0707 06:15:13.250882 3158 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:15:13.884804 containerd[1726]: time="2025-07-07T06:15:13.884770968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:13.889947 containerd[1726]: time="2025-07-07T06:15:13.889919698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 06:15:13.896921 containerd[1726]: time="2025-07-07T06:15:13.896859159Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:13.904963 containerd[1726]: time="2025-07-07T06:15:13.904916873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:13.905426 containerd[1726]: time="2025-07-07T06:15:13.905286509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.709591623s" Jul 7 06:15:13.905426 containerd[1726]: time="2025-07-07T06:15:13.905312599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 06:15:13.907171 containerd[1726]: time="2025-07-07T06:15:13.907150623Z" level=info msg="CreateContainer within sandbox \"b8e420e7dd1c5efd2859784afe22fc14b30ca302e77bb0c659c0ae03c25b1c3e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 06:15:13.977724 containerd[1726]: time="2025-07-07T06:15:13.976819948Z" level=info msg="Container b951cdbd09811e46355ac81b600800d0126ec9ee03f9125d324f5463732de655: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:14.014277 containerd[1726]: time="2025-07-07T06:15:14.014256589Z" level=info msg="CreateContainer within sandbox \"b8e420e7dd1c5efd2859784afe22fc14b30ca302e77bb0c659c0ae03c25b1c3e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b951cdbd09811e46355ac81b600800d0126ec9ee03f9125d324f5463732de655\"" Jul 7 06:15:14.014683 containerd[1726]: time="2025-07-07T06:15:14.014562481Z" level=info msg="StartContainer for \"b951cdbd09811e46355ac81b600800d0126ec9ee03f9125d324f5463732de655\"" Jul 7 06:15:14.016022 containerd[1726]: time="2025-07-07T06:15:14.015986767Z" level=info msg="connecting to shim b951cdbd09811e46355ac81b600800d0126ec9ee03f9125d324f5463732de655" address="unix:///run/containerd/s/d563630c667d9c96a28dac8b04c3558a263a5166387867b601ac51e2c363f496" protocol=ttrpc version=3 Jul 7 06:15:14.032890 systemd[1]: Started cri-containerd-b951cdbd09811e46355ac81b600800d0126ec9ee03f9125d324f5463732de655.scope - libcontainer container b951cdbd09811e46355ac81b600800d0126ec9ee03f9125d324f5463732de655. 
Jul 7 06:15:14.064131 containerd[1726]: time="2025-07-07T06:15:14.064110723Z" level=info msg="StartContainer for \"b951cdbd09811e46355ac81b600800d0126ec9ee03f9125d324f5463732de655\" returns successfully" Jul 7 06:15:14.066374 systemd[1]: cri-containerd-b951cdbd09811e46355ac81b600800d0126ec9ee03f9125d324f5463732de655.scope: Deactivated successfully. Jul 7 06:15:14.069074 containerd[1726]: time="2025-07-07T06:15:14.069050467Z" level=info msg="received exit event container_id:\"b951cdbd09811e46355ac81b600800d0126ec9ee03f9125d324f5463732de655\" id:\"b951cdbd09811e46355ac81b600800d0126ec9ee03f9125d324f5463732de655\" pid:3823 exited_at:{seconds:1751868914 nanos:68787803}" Jul 7 06:15:14.069198 containerd[1726]: time="2025-07-07T06:15:14.069148291Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b951cdbd09811e46355ac81b600800d0126ec9ee03f9125d324f5463732de655\" id:\"b951cdbd09811e46355ac81b600800d0126ec9ee03f9125d324f5463732de655\" pid:3823 exited_at:{seconds:1751868914 nanos:68787803}" Jul 7 06:15:14.084669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b951cdbd09811e46355ac81b600800d0126ec9ee03f9125d324f5463732de655-rootfs.mount: Deactivated successfully. 
Jul 7 06:15:14.155958 kubelet[3158]: I0707 06:15:14.155901 3158 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:15:14.185825 kubelet[3158]: I0707 06:15:14.185781 3158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cd4dff845-9cc2l" podStartSLOduration=3.467570868 podStartE2EDuration="6.185769012s" podCreationTimestamp="2025-07-07 06:15:08 +0000 UTC" firstStartedPulling="2025-07-07 06:15:09.477088089 +0000 UTC m=+22.466327044" lastFinishedPulling="2025-07-07 06:15:12.195286243 +0000 UTC m=+25.184525188" observedRunningTime="2025-07-07 06:15:13.230100746 +0000 UTC m=+26.219339704" watchObservedRunningTime="2025-07-07 06:15:14.185769012 +0000 UTC m=+27.175008053" Jul 7 06:15:15.081346 kubelet[3158]: E0707 06:15:15.080523 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xpqzl" podUID="231c5af8-3370-4b3f-ab8c-8299c58a8f69" Jul 7 06:15:17.080594 kubelet[3158]: E0707 06:15:17.080268 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xpqzl" podUID="231c5af8-3370-4b3f-ab8c-8299c58a8f69" Jul 7 06:15:17.164264 containerd[1726]: time="2025-07-07T06:15:17.164061525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 06:15:19.080735 kubelet[3158]: E0707 06:15:19.080334 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xpqzl" 
podUID="231c5af8-3370-4b3f-ab8c-8299c58a8f69" Jul 7 06:15:21.081251 kubelet[3158]: E0707 06:15:21.080217 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xpqzl" podUID="231c5af8-3370-4b3f-ab8c-8299c58a8f69" Jul 7 06:15:23.080739 kubelet[3158]: E0707 06:15:23.079994 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xpqzl" podUID="231c5af8-3370-4b3f-ab8c-8299c58a8f69" Jul 7 06:15:24.466505 containerd[1726]: time="2025-07-07T06:15:24.466467237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:24.469725 containerd[1726]: time="2025-07-07T06:15:24.469590119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 06:15:24.514387 containerd[1726]: time="2025-07-07T06:15:24.514353777Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:24.561566 containerd[1726]: time="2025-07-07T06:15:24.561507934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:24.562203 containerd[1726]: time="2025-07-07T06:15:24.562121831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 7.398025342s" Jul 7 06:15:24.562203 containerd[1726]: time="2025-07-07T06:15:24.562147551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 06:15:24.564095 containerd[1726]: time="2025-07-07T06:15:24.563853550Z" level=info msg="CreateContainer within sandbox \"b8e420e7dd1c5efd2859784afe22fc14b30ca302e77bb0c659c0ae03c25b1c3e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 06:15:24.771315 containerd[1726]: time="2025-07-07T06:15:24.769808659Z" level=info msg="Container de296405c1239790f386ebed5a6ff44936b423b33695bdfbca2d9a6f2fd1d422: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:24.869203 containerd[1726]: time="2025-07-07T06:15:24.869181676Z" level=info msg="CreateContainer within sandbox \"b8e420e7dd1c5efd2859784afe22fc14b30ca302e77bb0c659c0ae03c25b1c3e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"de296405c1239790f386ebed5a6ff44936b423b33695bdfbca2d9a6f2fd1d422\"" Jul 7 06:15:24.870447 containerd[1726]: time="2025-07-07T06:15:24.869531907Z" level=info msg="StartContainer for \"de296405c1239790f386ebed5a6ff44936b423b33695bdfbca2d9a6f2fd1d422\"" Jul 7 06:15:24.870895 containerd[1726]: time="2025-07-07T06:15:24.870872631Z" level=info msg="connecting to shim de296405c1239790f386ebed5a6ff44936b423b33695bdfbca2d9a6f2fd1d422" address="unix:///run/containerd/s/d563630c667d9c96a28dac8b04c3558a263a5166387867b601ac51e2c363f496" protocol=ttrpc version=3 Jul 7 06:15:24.885881 systemd[1]: Started cri-containerd-de296405c1239790f386ebed5a6ff44936b423b33695bdfbca2d9a6f2fd1d422.scope - libcontainer container de296405c1239790f386ebed5a6ff44936b423b33695bdfbca2d9a6f2fd1d422. 
Jul 7 06:15:24.915878 containerd[1726]: time="2025-07-07T06:15:24.915853434Z" level=info msg="StartContainer for \"de296405c1239790f386ebed5a6ff44936b423b33695bdfbca2d9a6f2fd1d422\" returns successfully" Jul 7 06:15:25.081779 kubelet[3158]: E0707 06:15:25.080842 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xpqzl" podUID="231c5af8-3370-4b3f-ab8c-8299c58a8f69" Jul 7 06:15:26.216322 kubelet[3158]: I0707 06:15:26.216248 3158 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:15:27.080978 kubelet[3158]: E0707 06:15:27.080681 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xpqzl" podUID="231c5af8-3370-4b3f-ab8c-8299c58a8f69" Jul 7 06:15:29.080981 kubelet[3158]: E0707 06:15:29.080181 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xpqzl" podUID="231c5af8-3370-4b3f-ab8c-8299c58a8f69" Jul 7 06:15:31.081223 kubelet[3158]: E0707 06:15:31.080840 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xpqzl" podUID="231c5af8-3370-4b3f-ab8c-8299c58a8f69" Jul 7 06:15:32.432293 systemd[1]: cri-containerd-de296405c1239790f386ebed5a6ff44936b423b33695bdfbca2d9a6f2fd1d422.scope: Deactivated 
successfully. Jul 7 06:15:32.432549 systemd[1]: cri-containerd-de296405c1239790f386ebed5a6ff44936b423b33695bdfbca2d9a6f2fd1d422.scope: Consumed 368ms CPU time, 190.3M memory peak, 171.2M written to disk. Jul 7 06:15:32.434031 containerd[1726]: time="2025-07-07T06:15:32.433967272Z" level=info msg="received exit event container_id:\"de296405c1239790f386ebed5a6ff44936b423b33695bdfbca2d9a6f2fd1d422\" id:\"de296405c1239790f386ebed5a6ff44936b423b33695bdfbca2d9a6f2fd1d422\" pid:3886 exited_at:{seconds:1751868932 nanos:433656271}" Jul 7 06:15:32.434562 containerd[1726]: time="2025-07-07T06:15:32.434529876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de296405c1239790f386ebed5a6ff44936b423b33695bdfbca2d9a6f2fd1d422\" id:\"de296405c1239790f386ebed5a6ff44936b423b33695bdfbca2d9a6f2fd1d422\" pid:3886 exited_at:{seconds:1751868932 nanos:433656271}" Jul 7 06:15:32.434955 kubelet[3158]: I0707 06:15:32.434928 3158 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 06:15:32.457047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de296405c1239790f386ebed5a6ff44936b423b33695bdfbca2d9a6f2fd1d422-rootfs.mount: Deactivated successfully. Jul 7 06:15:32.511847 systemd[1]: Created slice kubepods-burstable-pod8493a410_5652_4168_b38b_c86eb164b3f7.slice - libcontainer container kubepods-burstable-pod8493a410_5652_4168_b38b_c86eb164b3f7.slice. Jul 7 06:15:32.562731 systemd[1]: Created slice kubepods-burstable-podc309951e_4538_4e38_9e2d_da07ad208ca7.slice - libcontainer container kubepods-burstable-podc309951e_4538_4e38_9e2d_da07ad208ca7.slice. Jul 7 06:15:32.571037 systemd[1]: Created slice kubepods-besteffort-podd06071ec_2755_4e17_84e0_19c69fd399e6.slice - libcontainer container kubepods-besteffort-podd06071ec_2755_4e17_84e0_19c69fd399e6.slice. 
Jul 7 06:15:32.575021 systemd[1]: Created slice kubepods-besteffort-podfae38f98_d440_4845_94b6_2cd7545ba6a7.slice - libcontainer container kubepods-besteffort-podfae38f98_d440_4845_94b6_2cd7545ba6a7.slice. Jul 7 06:15:32.576276 kubelet[3158]: I0707 06:15:32.576258 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5p46\" (UniqueName: \"kubernetes.io/projected/c1cf1a75-75e3-4759-a756-65fb7c708ccd-kube-api-access-d5p46\") pod \"calico-apiserver-564c97b774-7z2jl\" (UID: \"c1cf1a75-75e3-4759-a756-65fb7c708ccd\") " pod="calico-apiserver/calico-apiserver-564c97b774-7z2jl" Jul 7 06:15:32.576341 kubelet[3158]: I0707 06:15:32.576288 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d06071ec-2755-4e17-84e0-19c69fd399e6-whisker-ca-bundle\") pod \"whisker-5dfbff9bd6-5xb4n\" (UID: \"d06071ec-2755-4e17-84e0-19c69fd399e6\") " pod="calico-system/whisker-5dfbff9bd6-5xb4n" Jul 7 06:15:32.576341 kubelet[3158]: I0707 06:15:32.576331 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/977c4d92-213a-49c0-b436-ff30c61708a2-goldmane-key-pair\") pod \"goldmane-58fd7646b9-8prv9\" (UID: \"977c4d92-213a-49c0-b436-ff30c61708a2\") " pod="calico-system/goldmane-58fd7646b9-8prv9" Jul 7 06:15:32.576389 kubelet[3158]: I0707 06:15:32.576346 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk9vn\" (UniqueName: \"kubernetes.io/projected/977c4d92-213a-49c0-b436-ff30c61708a2-kube-api-access-rk9vn\") pod \"goldmane-58fd7646b9-8prv9\" (UID: \"977c4d92-213a-49c0-b436-ff30c61708a2\") " pod="calico-system/goldmane-58fd7646b9-8prv9" Jul 7 06:15:32.576389 kubelet[3158]: I0707 06:15:32.576364 3158 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pdzc\" (UniqueName: \"kubernetes.io/projected/fae38f98-d440-4845-94b6-2cd7545ba6a7-kube-api-access-5pdzc\") pod \"calico-kube-controllers-cbcdccb6b-29z7l\" (UID: \"fae38f98-d440-4845-94b6-2cd7545ba6a7\") " pod="calico-system/calico-kube-controllers-cbcdccb6b-29z7l" Jul 7 06:15:32.576436 kubelet[3158]: I0707 06:15:32.576407 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8493a410-5652-4168-b38b-c86eb164b3f7-config-volume\") pod \"coredns-7c65d6cfc9-zmsvc\" (UID: \"8493a410-5652-4168-b38b-c86eb164b3f7\") " pod="kube-system/coredns-7c65d6cfc9-zmsvc" Jul 7 06:15:32.576436 kubelet[3158]: I0707 06:15:32.576423 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0ace5f14-39da-487a-8159-8a8fa4e985ab-calico-apiserver-certs\") pod \"calico-apiserver-564c97b774-4l6xr\" (UID: \"0ace5f14-39da-487a-8159-8a8fa4e985ab\") " pod="calico-apiserver/calico-apiserver-564c97b774-4l6xr" Jul 7 06:15:32.576479 kubelet[3158]: I0707 06:15:32.576439 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxf9t\" (UniqueName: \"kubernetes.io/projected/d06071ec-2755-4e17-84e0-19c69fd399e6-kube-api-access-vxf9t\") pod \"whisker-5dfbff9bd6-5xb4n\" (UID: \"d06071ec-2755-4e17-84e0-19c69fd399e6\") " pod="calico-system/whisker-5dfbff9bd6-5xb4n" Jul 7 06:15:32.576479 kubelet[3158]: I0707 06:15:32.576476 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs4d7\" (UniqueName: \"kubernetes.io/projected/8493a410-5652-4168-b38b-c86eb164b3f7-kube-api-access-vs4d7\") pod \"coredns-7c65d6cfc9-zmsvc\" (UID: \"8493a410-5652-4168-b38b-c86eb164b3f7\") " 
pod="kube-system/coredns-7c65d6cfc9-zmsvc" Jul 7 06:15:32.576523 kubelet[3158]: I0707 06:15:32.576494 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/977c4d92-213a-49c0-b436-ff30c61708a2-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-8prv9\" (UID: \"977c4d92-213a-49c0-b436-ff30c61708a2\") " pod="calico-system/goldmane-58fd7646b9-8prv9" Jul 7 06:15:32.576523 kubelet[3158]: I0707 06:15:32.576511 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fae38f98-d440-4845-94b6-2cd7545ba6a7-tigera-ca-bundle\") pod \"calico-kube-controllers-cbcdccb6b-29z7l\" (UID: \"fae38f98-d440-4845-94b6-2cd7545ba6a7\") " pod="calico-system/calico-kube-controllers-cbcdccb6b-29z7l" Jul 7 06:15:32.576764 kubelet[3158]: I0707 06:15:32.576558 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c1cf1a75-75e3-4759-a756-65fb7c708ccd-calico-apiserver-certs\") pod \"calico-apiserver-564c97b774-7z2jl\" (UID: \"c1cf1a75-75e3-4759-a756-65fb7c708ccd\") " pod="calico-apiserver/calico-apiserver-564c97b774-7z2jl" Jul 7 06:15:32.576764 kubelet[3158]: I0707 06:15:32.576575 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977c4d92-213a-49c0-b436-ff30c61708a2-config\") pod \"goldmane-58fd7646b9-8prv9\" (UID: \"977c4d92-213a-49c0-b436-ff30c61708a2\") " pod="calico-system/goldmane-58fd7646b9-8prv9" Jul 7 06:15:32.576764 kubelet[3158]: I0707 06:15:32.576594 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5npn\" (UniqueName: \"kubernetes.io/projected/0ace5f14-39da-487a-8159-8a8fa4e985ab-kube-api-access-b5npn\") pod 
\"calico-apiserver-564c97b774-4l6xr\" (UID: \"0ace5f14-39da-487a-8159-8a8fa4e985ab\") " pod="calico-apiserver/calico-apiserver-564c97b774-4l6xr" Jul 7 06:15:32.576764 kubelet[3158]: I0707 06:15:32.576643 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c309951e-4538-4e38-9e2d-da07ad208ca7-config-volume\") pod \"coredns-7c65d6cfc9-vmkr5\" (UID: \"c309951e-4538-4e38-9e2d-da07ad208ca7\") " pod="kube-system/coredns-7c65d6cfc9-vmkr5" Jul 7 06:15:32.576764 kubelet[3158]: I0707 06:15:32.576658 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbpgr\" (UniqueName: \"kubernetes.io/projected/c309951e-4538-4e38-9e2d-da07ad208ca7-kube-api-access-lbpgr\") pod \"coredns-7c65d6cfc9-vmkr5\" (UID: \"c309951e-4538-4e38-9e2d-da07ad208ca7\") " pod="kube-system/coredns-7c65d6cfc9-vmkr5" Jul 7 06:15:32.576926 kubelet[3158]: I0707 06:15:32.576726 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d06071ec-2755-4e17-84e0-19c69fd399e6-whisker-backend-key-pair\") pod \"whisker-5dfbff9bd6-5xb4n\" (UID: \"d06071ec-2755-4e17-84e0-19c69fd399e6\") " pod="calico-system/whisker-5dfbff9bd6-5xb4n" Jul 7 06:15:32.578270 systemd[1]: Created slice kubepods-besteffort-pod0ace5f14_39da_487a_8159_8a8fa4e985ab.slice - libcontainer container kubepods-besteffort-pod0ace5f14_39da_487a_8159_8a8fa4e985ab.slice. Jul 7 06:15:32.582624 systemd[1]: Created slice kubepods-besteffort-podc1cf1a75_75e3_4759_a756_65fb7c708ccd.slice - libcontainer container kubepods-besteffort-podc1cf1a75_75e3_4759_a756_65fb7c708ccd.slice. Jul 7 06:15:32.585571 systemd[1]: Created slice kubepods-besteffort-pod977c4d92_213a_49c0_b436_ff30c61708a2.slice - libcontainer container kubepods-besteffort-pod977c4d92_213a_49c0_b436_ff30c61708a2.slice. 
Jul 7 06:15:32.814984 containerd[1726]: time="2025-07-07T06:15:32.814942607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zmsvc,Uid:8493a410-5652-4168-b38b-c86eb164b3f7,Namespace:kube-system,Attempt:0,}" Jul 7 06:15:32.867974 containerd[1726]: time="2025-07-07T06:15:32.867841857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vmkr5,Uid:c309951e-4538-4e38-9e2d-da07ad208ca7,Namespace:kube-system,Attempt:0,}" Jul 7 06:15:32.875914 containerd[1726]: time="2025-07-07T06:15:32.875845014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dfbff9bd6-5xb4n,Uid:d06071ec-2755-4e17-84e0-19c69fd399e6,Namespace:calico-system,Attempt:0,}" Jul 7 06:15:32.880344 containerd[1726]: time="2025-07-07T06:15:32.880321159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cbcdccb6b-29z7l,Uid:fae38f98-d440-4845-94b6-2cd7545ba6a7,Namespace:calico-system,Attempt:0,}" Jul 7 06:15:32.881874 containerd[1726]: time="2025-07-07T06:15:32.881852571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c97b774-4l6xr,Uid:0ace5f14-39da-487a-8159-8a8fa4e985ab,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:15:32.885371 containerd[1726]: time="2025-07-07T06:15:32.885332063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c97b774-7z2jl,Uid:c1cf1a75-75e3-4759-a756-65fb7c708ccd,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:15:32.887848 containerd[1726]: time="2025-07-07T06:15:32.887796856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-8prv9,Uid:977c4d92-213a-49c0-b436-ff30c61708a2,Namespace:calico-system,Attempt:0,}" Jul 7 06:15:33.085273 systemd[1]: Created slice kubepods-besteffort-pod231c5af8_3370_4b3f_ab8c_8299c58a8f69.slice - libcontainer container kubepods-besteffort-pod231c5af8_3370_4b3f_ab8c_8299c58a8f69.slice. 
Jul 7 06:15:33.086952 containerd[1726]: time="2025-07-07T06:15:33.086932048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xpqzl,Uid:231c5af8-3370-4b3f-ab8c-8299c58a8f69,Namespace:calico-system,Attempt:0,}" Jul 7 06:15:35.794721 containerd[1726]: time="2025-07-07T06:15:35.794271087Z" level=error msg="Failed to destroy network for sandbox \"4e478c810bc638c87e6d5f6b1b3022841c160f0b77d04f1f33317e36d414c2d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.799926 containerd[1726]: time="2025-07-07T06:15:35.799885822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vmkr5,Uid:c309951e-4538-4e38-9e2d-da07ad208ca7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e478c810bc638c87e6d5f6b1b3022841c160f0b77d04f1f33317e36d414c2d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.800270 kubelet[3158]: E0707 06:15:35.800138 3158 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e478c810bc638c87e6d5f6b1b3022841c160f0b77d04f1f33317e36d414c2d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.800537 kubelet[3158]: E0707 06:15:35.800302 3158 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e478c810bc638c87e6d5f6b1b3022841c160f0b77d04f1f33317e36d414c2d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vmkr5" Jul 7 06:15:35.800537 kubelet[3158]: E0707 06:15:35.800331 3158 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e478c810bc638c87e6d5f6b1b3022841c160f0b77d04f1f33317e36d414c2d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vmkr5" Jul 7 06:15:35.800537 kubelet[3158]: E0707 06:15:35.800420 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-vmkr5_kube-system(c309951e-4538-4e38-9e2d-da07ad208ca7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-vmkr5_kube-system(c309951e-4538-4e38-9e2d-da07ad208ca7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e478c810bc638c87e6d5f6b1b3022841c160f0b77d04f1f33317e36d414c2d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vmkr5" podUID="c309951e-4538-4e38-9e2d-da07ad208ca7" Jul 7 06:15:35.805949 containerd[1726]: time="2025-07-07T06:15:35.805759112Z" level=error msg="Failed to destroy network for sandbox \"f05ea58adcdffa6a6f16a3092fa7d073956124985f1421b6ed7b78ac8cb7fe3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.816862 containerd[1726]: time="2025-07-07T06:15:35.816827203Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-zmsvc,Uid:8493a410-5652-4168-b38b-c86eb164b3f7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05ea58adcdffa6a6f16a3092fa7d073956124985f1421b6ed7b78ac8cb7fe3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.817523 kubelet[3158]: E0707 06:15:35.817494 3158 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05ea58adcdffa6a6f16a3092fa7d073956124985f1421b6ed7b78ac8cb7fe3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.817598 kubelet[3158]: E0707 06:15:35.817540 3158 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05ea58adcdffa6a6f16a3092fa7d073956124985f1421b6ed7b78ac8cb7fe3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zmsvc" Jul 7 06:15:35.817598 kubelet[3158]: E0707 06:15:35.817558 3158 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05ea58adcdffa6a6f16a3092fa7d073956124985f1421b6ed7b78ac8cb7fe3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zmsvc" Jul 7 06:15:35.817648 kubelet[3158]: E0707 06:15:35.817592 3158 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-zmsvc_kube-system(8493a410-5652-4168-b38b-c86eb164b3f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-zmsvc_kube-system(8493a410-5652-4168-b38b-c86eb164b3f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f05ea58adcdffa6a6f16a3092fa7d073956124985f1421b6ed7b78ac8cb7fe3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-zmsvc" podUID="8493a410-5652-4168-b38b-c86eb164b3f7" Jul 7 06:15:35.826353 containerd[1726]: time="2025-07-07T06:15:35.826322284Z" level=error msg="Failed to destroy network for sandbox \"5b92e7d5ed1cb93431172c07b5572b01e2f92316bc6474a4e8ca00ff384e4a48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.836081 containerd[1726]: time="2025-07-07T06:15:35.836006902Z" level=error msg="Failed to destroy network for sandbox \"96cf33b75099b5cefb4aa08d919fe9c27ccba802dfcb9ebe25aac8a7d370ca8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.837408 containerd[1726]: time="2025-07-07T06:15:35.837328077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cbcdccb6b-29z7l,Uid:fae38f98-d440-4845-94b6-2cd7545ba6a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b92e7d5ed1cb93431172c07b5572b01e2f92316bc6474a4e8ca00ff384e4a48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.838352 kubelet[3158]: E0707 06:15:35.838090 3158 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b92e7d5ed1cb93431172c07b5572b01e2f92316bc6474a4e8ca00ff384e4a48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.838352 kubelet[3158]: E0707 06:15:35.838142 3158 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b92e7d5ed1cb93431172c07b5572b01e2f92316bc6474a4e8ca00ff384e4a48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cbcdccb6b-29z7l" Jul 7 06:15:35.838352 kubelet[3158]: E0707 06:15:35.838160 3158 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b92e7d5ed1cb93431172c07b5572b01e2f92316bc6474a4e8ca00ff384e4a48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cbcdccb6b-29z7l" Jul 7 06:15:35.838482 kubelet[3158]: E0707 06:15:35.838194 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cbcdccb6b-29z7l_calico-system(fae38f98-d440-4845-94b6-2cd7545ba6a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cbcdccb6b-29z7l_calico-system(fae38f98-d440-4845-94b6-2cd7545ba6a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"5b92e7d5ed1cb93431172c07b5572b01e2f92316bc6474a4e8ca00ff384e4a48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cbcdccb6b-29z7l" podUID="fae38f98-d440-4845-94b6-2cd7545ba6a7" Jul 7 06:15:35.842037 containerd[1726]: time="2025-07-07T06:15:35.841963859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dfbff9bd6-5xb4n,Uid:d06071ec-2755-4e17-84e0-19c69fd399e6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96cf33b75099b5cefb4aa08d919fe9c27ccba802dfcb9ebe25aac8a7d370ca8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.842435 kubelet[3158]: E0707 06:15:35.842190 3158 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96cf33b75099b5cefb4aa08d919fe9c27ccba802dfcb9ebe25aac8a7d370ca8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.842435 kubelet[3158]: E0707 06:15:35.842232 3158 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96cf33b75099b5cefb4aa08d919fe9c27ccba802dfcb9ebe25aac8a7d370ca8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dfbff9bd6-5xb4n" Jul 7 06:15:35.842435 kubelet[3158]: E0707 06:15:35.842249 3158 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96cf33b75099b5cefb4aa08d919fe9c27ccba802dfcb9ebe25aac8a7d370ca8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dfbff9bd6-5xb4n" Jul 7 06:15:35.842536 kubelet[3158]: E0707 06:15:35.842285 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5dfbff9bd6-5xb4n_calico-system(d06071ec-2755-4e17-84e0-19c69fd399e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5dfbff9bd6-5xb4n_calico-system(d06071ec-2755-4e17-84e0-19c69fd399e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96cf33b75099b5cefb4aa08d919fe9c27ccba802dfcb9ebe25aac8a7d370ca8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dfbff9bd6-5xb4n" podUID="d06071ec-2755-4e17-84e0-19c69fd399e6" Jul 7 06:15:35.842814 containerd[1726]: time="2025-07-07T06:15:35.842790066Z" level=error msg="Failed to destroy network for sandbox \"5e16bc269ec4406a96a9ddb047138c501352879a255f5872e5d71927d9e3c24a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.848225 containerd[1726]: time="2025-07-07T06:15:35.847877780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c97b774-4l6xr,Uid:0ace5f14-39da-487a-8159-8a8fa4e985ab,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e16bc269ec4406a96a9ddb047138c501352879a255f5872e5d71927d9e3c24a\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.848585 kubelet[3158]: E0707 06:15:35.848352 3158 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e16bc269ec4406a96a9ddb047138c501352879a255f5872e5d71927d9e3c24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.848585 kubelet[3158]: E0707 06:15:35.848387 3158 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e16bc269ec4406a96a9ddb047138c501352879a255f5872e5d71927d9e3c24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c97b774-4l6xr" Jul 7 06:15:35.848585 kubelet[3158]: E0707 06:15:35.848517 3158 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e16bc269ec4406a96a9ddb047138c501352879a255f5872e5d71927d9e3c24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c97b774-4l6xr" Jul 7 06:15:35.848944 kubelet[3158]: E0707 06:15:35.848558 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564c97b774-4l6xr_calico-apiserver(0ace5f14-39da-487a-8159-8a8fa4e985ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-564c97b774-4l6xr_calico-apiserver(0ace5f14-39da-487a-8159-8a8fa4e985ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e16bc269ec4406a96a9ddb047138c501352879a255f5872e5d71927d9e3c24a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564c97b774-4l6xr" podUID="0ace5f14-39da-487a-8159-8a8fa4e985ab" Jul 7 06:15:35.871045 containerd[1726]: time="2025-07-07T06:15:35.871020070Z" level=error msg="Failed to destroy network for sandbox \"d63270e9f3a62ec0fedaf95d4de5e530df01bfef69cc2497f6be593609508932\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.877125 containerd[1726]: time="2025-07-07T06:15:35.877092718Z" level=error msg="Failed to destroy network for sandbox \"ae99d7463bfb54de9b9f821555d4e06a6233a5368f51430e9f5a122caecf24b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.878268 containerd[1726]: time="2025-07-07T06:15:35.878236034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xpqzl,Uid:231c5af8-3370-4b3f-ab8c-8299c58a8f69,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63270e9f3a62ec0fedaf95d4de5e530df01bfef69cc2497f6be593609508932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.878587 kubelet[3158]: E0707 06:15:35.878555 3158 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63270e9f3a62ec0fedaf95d4de5e530df01bfef69cc2497f6be593609508932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.878714 kubelet[3158]: E0707 06:15:35.878673 3158 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63270e9f3a62ec0fedaf95d4de5e530df01bfef69cc2497f6be593609508932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xpqzl" Jul 7 06:15:35.878813 kubelet[3158]: E0707 06:15:35.878695 3158 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63270e9f3a62ec0fedaf95d4de5e530df01bfef69cc2497f6be593609508932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xpqzl" Jul 7 06:15:35.878813 kubelet[3158]: E0707 06:15:35.878787 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xpqzl_calico-system(231c5af8-3370-4b3f-ab8c-8299c58a8f69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xpqzl_calico-system(231c5af8-3370-4b3f-ab8c-8299c58a8f69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d63270e9f3a62ec0fedaf95d4de5e530df01bfef69cc2497f6be593609508932\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-xpqzl" podUID="231c5af8-3370-4b3f-ab8c-8299c58a8f69" Jul 7 06:15:35.880142 containerd[1726]: time="2025-07-07T06:15:35.880118452Z" level=error msg="Failed to destroy network for sandbox \"8fc4aaa47c3dae2dd5f094dcfe400e452634a8ae17e3466fb0be431a022b8e9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.882692 containerd[1726]: time="2025-07-07T06:15:35.882634826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c97b774-7z2jl,Uid:c1cf1a75-75e3-4759-a756-65fb7c708ccd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae99d7463bfb54de9b9f821555d4e06a6233a5368f51430e9f5a122caecf24b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.882880 kubelet[3158]: E0707 06:15:35.882848 3158 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae99d7463bfb54de9b9f821555d4e06a6233a5368f51430e9f5a122caecf24b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.882931 kubelet[3158]: E0707 06:15:35.882879 3158 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae99d7463bfb54de9b9f821555d4e06a6233a5368f51430e9f5a122caecf24b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-564c97b774-7z2jl" Jul 7 06:15:35.882931 kubelet[3158]: E0707 06:15:35.882899 3158 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae99d7463bfb54de9b9f821555d4e06a6233a5368f51430e9f5a122caecf24b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c97b774-7z2jl" Jul 7 06:15:35.882985 kubelet[3158]: E0707 06:15:35.882932 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564c97b774-7z2jl_calico-apiserver(c1cf1a75-75e3-4759-a756-65fb7c708ccd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564c97b774-7z2jl_calico-apiserver(c1cf1a75-75e3-4759-a756-65fb7c708ccd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae99d7463bfb54de9b9f821555d4e06a6233a5368f51430e9f5a122caecf24b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564c97b774-7z2jl" podUID="c1cf1a75-75e3-4759-a756-65fb7c708ccd" Jul 7 06:15:35.889026 containerd[1726]: time="2025-07-07T06:15:35.888992166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-8prv9,Uid:977c4d92-213a-49c0-b436-ff30c61708a2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fc4aaa47c3dae2dd5f094dcfe400e452634a8ae17e3466fb0be431a022b8e9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 
06:15:35.889152 kubelet[3158]: E0707 06:15:35.889126 3158 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fc4aaa47c3dae2dd5f094dcfe400e452634a8ae17e3466fb0be431a022b8e9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:15:35.889188 kubelet[3158]: E0707 06:15:35.889157 3158 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fc4aaa47c3dae2dd5f094dcfe400e452634a8ae17e3466fb0be431a022b8e9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-8prv9" Jul 7 06:15:35.889188 kubelet[3158]: E0707 06:15:35.889174 3158 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fc4aaa47c3dae2dd5f094dcfe400e452634a8ae17e3466fb0be431a022b8e9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-8prv9" Jul 7 06:15:35.889236 kubelet[3158]: E0707 06:15:35.889203 3158 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-8prv9_calico-system(977c4d92-213a-49c0-b436-ff30c61708a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-8prv9_calico-system(977c4d92-213a-49c0-b436-ff30c61708a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fc4aaa47c3dae2dd5f094dcfe400e452634a8ae17e3466fb0be431a022b8e9e\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-8prv9" podUID="977c4d92-213a-49c0-b436-ff30c61708a2" Jul 7 06:15:36.004962 systemd[1]: run-netns-cni\x2d6d517a4f\x2d8b41\x2d2d32\x2da5a3\x2dba7d6832ff4f.mount: Deactivated successfully. Jul 7 06:15:36.005042 systemd[1]: run-netns-cni\x2db96f8503\x2d851d\x2dcba3\x2dc39b\x2d7c1014e11404.mount: Deactivated successfully. Jul 7 06:15:36.005087 systemd[1]: run-netns-cni\x2d6c8afb78\x2d9653\x2d7d7b\x2d78b1\x2d6216d0c2d3a0.mount: Deactivated successfully. Jul 7 06:15:36.005129 systemd[1]: run-netns-cni\x2d87a9a9b9\x2d100c\x2d37e4\x2d2c88\x2d440e555c605e.mount: Deactivated successfully. Jul 7 06:15:36.005167 systemd[1]: run-netns-cni\x2d7aad9553\x2d6e5b\x2dc370\x2d84f0\x2d656466195df8.mount: Deactivated successfully. Jul 7 06:15:36.191903 containerd[1726]: time="2025-07-07T06:15:36.191717960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 06:15:42.671532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1265818136.mount: Deactivated successfully. 
Jul 7 06:15:42.715306 containerd[1726]: time="2025-07-07T06:15:42.715257447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:42.719448 containerd[1726]: time="2025-07-07T06:15:42.719422864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 06:15:42.726117 containerd[1726]: time="2025-07-07T06:15:42.726086238Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:42.731407 containerd[1726]: time="2025-07-07T06:15:42.731343058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:42.732145 containerd[1726]: time="2025-07-07T06:15:42.732120381Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.540360814s" Jul 7 06:15:42.732328 containerd[1726]: time="2025-07-07T06:15:42.732253659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 06:15:42.744278 containerd[1726]: time="2025-07-07T06:15:42.744213423Z" level=info msg="CreateContainer within sandbox \"b8e420e7dd1c5efd2859784afe22fc14b30ca302e77bb0c659c0ae03c25b1c3e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:15:42.764894 containerd[1726]: time="2025-07-07T06:15:42.764872677Z" level=info msg="Container 
80fab047f314b5264709e3ac04ce7d5622880ee08b2caa82501a141979a853df: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:42.788234 containerd[1726]: time="2025-07-07T06:15:42.788210008Z" level=info msg="CreateContainer within sandbox \"b8e420e7dd1c5efd2859784afe22fc14b30ca302e77bb0c659c0ae03c25b1c3e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"80fab047f314b5264709e3ac04ce7d5622880ee08b2caa82501a141979a853df\"" Jul 7 06:15:42.788825 containerd[1726]: time="2025-07-07T06:15:42.788794137Z" level=info msg="StartContainer for \"80fab047f314b5264709e3ac04ce7d5622880ee08b2caa82501a141979a853df\"" Jul 7 06:15:42.790177 containerd[1726]: time="2025-07-07T06:15:42.790151067Z" level=info msg="connecting to shim 80fab047f314b5264709e3ac04ce7d5622880ee08b2caa82501a141979a853df" address="unix:///run/containerd/s/d563630c667d9c96a28dac8b04c3558a263a5166387867b601ac51e2c363f496" protocol=ttrpc version=3 Jul 7 06:15:42.811834 systemd[1]: Started cri-containerd-80fab047f314b5264709e3ac04ce7d5622880ee08b2caa82501a141979a853df.scope - libcontainer container 80fab047f314b5264709e3ac04ce7d5622880ee08b2caa82501a141979a853df. Jul 7 06:15:42.840936 containerd[1726]: time="2025-07-07T06:15:42.840914041Z" level=info msg="StartContainer for \"80fab047f314b5264709e3ac04ce7d5622880ee08b2caa82501a141979a853df\" returns successfully" Jul 7 06:15:43.062377 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 06:15:43.062497 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 7 06:15:43.235288 kubelet[3158]: I0707 06:15:43.235225 3158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxf9t\" (UniqueName: \"kubernetes.io/projected/d06071ec-2755-4e17-84e0-19c69fd399e6-kube-api-access-vxf9t\") pod \"d06071ec-2755-4e17-84e0-19c69fd399e6\" (UID: \"d06071ec-2755-4e17-84e0-19c69fd399e6\") " Jul 7 06:15:43.238552 kubelet[3158]: I0707 06:15:43.237973 3158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d06071ec-2755-4e17-84e0-19c69fd399e6-whisker-backend-key-pair\") pod \"d06071ec-2755-4e17-84e0-19c69fd399e6\" (UID: \"d06071ec-2755-4e17-84e0-19c69fd399e6\") " Jul 7 06:15:43.238552 kubelet[3158]: I0707 06:15:43.238010 3158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d06071ec-2755-4e17-84e0-19c69fd399e6-whisker-ca-bundle\") pod \"d06071ec-2755-4e17-84e0-19c69fd399e6\" (UID: \"d06071ec-2755-4e17-84e0-19c69fd399e6\") " Jul 7 06:15:43.240034 kubelet[3158]: I0707 06:15:43.240008 3158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d06071ec-2755-4e17-84e0-19c69fd399e6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d06071ec-2755-4e17-84e0-19c69fd399e6" (UID: "d06071ec-2755-4e17-84e0-19c69fd399e6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 06:15:43.243993 kubelet[3158]: I0707 06:15:43.243970 3158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d06071ec-2755-4e17-84e0-19c69fd399e6-kube-api-access-vxf9t" (OuterVolumeSpecName: "kube-api-access-vxf9t") pod "d06071ec-2755-4e17-84e0-19c69fd399e6" (UID: "d06071ec-2755-4e17-84e0-19c69fd399e6"). InnerVolumeSpecName "kube-api-access-vxf9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 06:15:43.246573 kubelet[3158]: I0707 06:15:43.246526 3158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v5gcp" podStartSLOduration=1.642313916 podStartE2EDuration="34.246512403s" podCreationTimestamp="2025-07-07 06:15:09 +0000 UTC" firstStartedPulling="2025-07-07 06:15:10.128689448 +0000 UTC m=+23.117928403" lastFinishedPulling="2025-07-07 06:15:42.732887929 +0000 UTC m=+55.722126890" observedRunningTime="2025-07-07 06:15:43.24435276 +0000 UTC m=+56.233591726" watchObservedRunningTime="2025-07-07 06:15:43.246512403 +0000 UTC m=+56.235751470" Jul 7 06:15:43.248560 kubelet[3158]: I0707 06:15:43.248533 3158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06071ec-2755-4e17-84e0-19c69fd399e6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d06071ec-2755-4e17-84e0-19c69fd399e6" (UID: "d06071ec-2755-4e17-84e0-19c69fd399e6"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 06:15:43.307126 containerd[1726]: time="2025-07-07T06:15:43.307073208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80fab047f314b5264709e3ac04ce7d5622880ee08b2caa82501a141979a853df\" id:\"12fcf55514cb7fdf44319d3e3f6ee114b86db395575960148b049b0d64db3600\" pid:4210 exit_status:1 exited_at:{seconds:1751868943 nanos:306743039}" Jul 7 06:15:43.338640 kubelet[3158]: I0707 06:15:43.338578 3158 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d06071ec-2755-4e17-84e0-19c69fd399e6-whisker-ca-bundle\") on node \"ci-4372.0.1-a-04b45ab1a6\" DevicePath \"\"" Jul 7 06:15:43.338640 kubelet[3158]: I0707 06:15:43.338596 3158 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxf9t\" (UniqueName: \"kubernetes.io/projected/d06071ec-2755-4e17-84e0-19c69fd399e6-kube-api-access-vxf9t\") on node \"ci-4372.0.1-a-04b45ab1a6\" DevicePath \"\"" Jul 7 06:15:43.338640 kubelet[3158]: I0707 06:15:43.338605 3158 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d06071ec-2755-4e17-84e0-19c69fd399e6-whisker-backend-key-pair\") on node \"ci-4372.0.1-a-04b45ab1a6\" DevicePath \"\"" Jul 7 06:15:43.512845 systemd[1]: Removed slice kubepods-besteffort-podd06071ec_2755_4e17_84e0_19c69fd399e6.slice - libcontainer container kubepods-besteffort-podd06071ec_2755_4e17_84e0_19c69fd399e6.slice. Jul 7 06:15:43.596881 systemd[1]: Created slice kubepods-besteffort-pod83ee94e0_1878_4e75_baa3_051f7bc1d26a.slice - libcontainer container kubepods-besteffort-pod83ee94e0_1878_4e75_baa3_051f7bc1d26a.slice. 
Jul 7 06:15:43.639359 kubelet[3158]: I0707 06:15:43.639334 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83ee94e0-1878-4e75-baa3-051f7bc1d26a-whisker-ca-bundle\") pod \"whisker-7bd47b4b74-vfflx\" (UID: \"83ee94e0-1878-4e75-baa3-051f7bc1d26a\") " pod="calico-system/whisker-7bd47b4b74-vfflx" Jul 7 06:15:43.639359 kubelet[3158]: I0707 06:15:43.639361 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd7xp\" (UniqueName: \"kubernetes.io/projected/83ee94e0-1878-4e75-baa3-051f7bc1d26a-kube-api-access-bd7xp\") pod \"whisker-7bd47b4b74-vfflx\" (UID: \"83ee94e0-1878-4e75-baa3-051f7bc1d26a\") " pod="calico-system/whisker-7bd47b4b74-vfflx" Jul 7 06:15:43.639439 kubelet[3158]: I0707 06:15:43.639378 3158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/83ee94e0-1878-4e75-baa3-051f7bc1d26a-whisker-backend-key-pair\") pod \"whisker-7bd47b4b74-vfflx\" (UID: \"83ee94e0-1878-4e75-baa3-051f7bc1d26a\") " pod="calico-system/whisker-7bd47b4b74-vfflx" Jul 7 06:15:43.672244 systemd[1]: var-lib-kubelet-pods-d06071ec\x2d2755\x2d4e17\x2d84e0\x2d19c69fd399e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvxf9t.mount: Deactivated successfully. Jul 7 06:15:43.675274 systemd[1]: var-lib-kubelet-pods-d06071ec\x2d2755\x2d4e17\x2d84e0\x2d19c69fd399e6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 7 06:15:43.900959 containerd[1726]: time="2025-07-07T06:15:43.900632300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bd47b4b74-vfflx,Uid:83ee94e0-1878-4e75-baa3-051f7bc1d26a,Namespace:calico-system,Attempt:0,}" Jul 7 06:15:44.008063 systemd-networkd[1357]: califd26ad9d882: Link UP Jul 7 06:15:44.008179 systemd-networkd[1357]: califd26ad9d882: Gained carrier Jul 7 06:15:44.019834 containerd[1726]: 2025-07-07 06:15:43.927 [INFO][4237] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:15:44.019834 containerd[1726]: 2025-07-07 06:15:43.938 [INFO][4237] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--04b45ab1a6-k8s-whisker--7bd47b4b74--vfflx-eth0 whisker-7bd47b4b74- calico-system 83ee94e0-1878-4e75-baa3-051f7bc1d26a 942 0 2025-07-07 06:15:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7bd47b4b74 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.0.1-a-04b45ab1a6 whisker-7bd47b4b74-vfflx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califd26ad9d882 [] [] }} ContainerID="ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" Namespace="calico-system" Pod="whisker-7bd47b4b74-vfflx" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-whisker--7bd47b4b74--vfflx-" Jul 7 06:15:44.019834 containerd[1726]: 2025-07-07 06:15:43.938 [INFO][4237] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" Namespace="calico-system" Pod="whisker-7bd47b4b74-vfflx" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-whisker--7bd47b4b74--vfflx-eth0" Jul 7 06:15:44.019834 containerd[1726]: 2025-07-07 06:15:43.957 [INFO][4249] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" HandleID="k8s-pod-network.ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-whisker--7bd47b4b74--vfflx-eth0" Jul 7 06:15:44.020019 containerd[1726]: 2025-07-07 06:15:43.957 [INFO][4249] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" HandleID="k8s-pod-network.ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-whisker--7bd47b4b74--vfflx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-a-04b45ab1a6", "pod":"whisker-7bd47b4b74-vfflx", "timestamp":"2025-07-07 06:15:43.957109115 +0000 UTC"}, Hostname:"ci-4372.0.1-a-04b45ab1a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:15:44.020019 containerd[1726]: 2025-07-07 06:15:43.957 [INFO][4249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:15:44.020019 containerd[1726]: 2025-07-07 06:15:43.957 [INFO][4249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:15:44.020019 containerd[1726]: 2025-07-07 06:15:43.957 [INFO][4249] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-04b45ab1a6' Jul 7 06:15:44.020019 containerd[1726]: 2025-07-07 06:15:43.961 [INFO][4249] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:44.020019 containerd[1726]: 2025-07-07 06:15:43.964 [INFO][4249] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:44.020019 containerd[1726]: 2025-07-07 06:15:43.967 [INFO][4249] ipam/ipam.go 511: Trying affinity for 192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:44.020019 containerd[1726]: 2025-07-07 06:15:43.969 [INFO][4249] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:44.020019 containerd[1726]: 2025-07-07 06:15:43.970 [INFO][4249] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:44.020255 containerd[1726]: 2025-07-07 06:15:43.970 [INFO][4249] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:44.020255 containerd[1726]: 2025-07-07 06:15:43.971 [INFO][4249] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a Jul 7 06:15:44.020255 containerd[1726]: 2025-07-07 06:15:43.977 [INFO][4249] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:44.020255 containerd[1726]: 2025-07-07 06:15:43.984 [INFO][4249] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.60.193/26] block=192.168.60.192/26 handle="k8s-pod-network.ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:44.020255 containerd[1726]: 2025-07-07 06:15:43.984 [INFO][4249] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.193/26] handle="k8s-pod-network.ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:44.020255 containerd[1726]: 2025-07-07 06:15:43.984 [INFO][4249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:15:44.020255 containerd[1726]: 2025-07-07 06:15:43.984 [INFO][4249] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.193/26] IPv6=[] ContainerID="ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" HandleID="k8s-pod-network.ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-whisker--7bd47b4b74--vfflx-eth0" Jul 7 06:15:44.020412 containerd[1726]: 2025-07-07 06:15:43.986 [INFO][4237] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" Namespace="calico-system" Pod="whisker-7bd47b4b74-vfflx" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-whisker--7bd47b4b74--vfflx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-whisker--7bd47b4b74--vfflx-eth0", GenerateName:"whisker-7bd47b4b74-", Namespace:"calico-system", SelfLink:"", UID:"83ee94e0-1878-4e75-baa3-051f7bc1d26a", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 15, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bd47b4b74", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"", Pod:"whisker-7bd47b4b74-vfflx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.60.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califd26ad9d882", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:44.020412 containerd[1726]: 2025-07-07 06:15:43.987 [INFO][4237] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.193/32] ContainerID="ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" Namespace="calico-system" Pod="whisker-7bd47b4b74-vfflx" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-whisker--7bd47b4b74--vfflx-eth0" Jul 7 06:15:44.020514 containerd[1726]: 2025-07-07 06:15:43.987 [INFO][4237] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd26ad9d882 ContainerID="ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" Namespace="calico-system" Pod="whisker-7bd47b4b74-vfflx" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-whisker--7bd47b4b74--vfflx-eth0" Jul 7 06:15:44.020514 containerd[1726]: 2025-07-07 06:15:44.007 [INFO][4237] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" Namespace="calico-system" Pod="whisker-7bd47b4b74-vfflx" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-whisker--7bd47b4b74--vfflx-eth0" Jul 7 06:15:44.020574 containerd[1726]: 2025-07-07 06:15:44.007 [INFO][4237] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" Namespace="calico-system" Pod="whisker-7bd47b4b74-vfflx" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-whisker--7bd47b4b74--vfflx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-whisker--7bd47b4b74--vfflx-eth0", GenerateName:"whisker-7bd47b4b74-", Namespace:"calico-system", SelfLink:"", UID:"83ee94e0-1878-4e75-baa3-051f7bc1d26a", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 15, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bd47b4b74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a", Pod:"whisker-7bd47b4b74-vfflx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.60.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califd26ad9d882", MAC:"36:95:81:3a:70:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:44.020637 containerd[1726]: 2025-07-07 06:15:44.018 [INFO][4237] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" Namespace="calico-system" 
Pod="whisker-7bd47b4b74-vfflx" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-whisker--7bd47b4b74--vfflx-eth0" Jul 7 06:15:44.068497 containerd[1726]: time="2025-07-07T06:15:44.068463235Z" level=info msg="connecting to shim ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a" address="unix:///run/containerd/s/db2f83e58beac0119e5257bb88a7e76f0c0492162eb2b5cdc516b678d429c12d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:15:44.084837 systemd[1]: Started cri-containerd-ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a.scope - libcontainer container ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a. Jul 7 06:15:44.120464 containerd[1726]: time="2025-07-07T06:15:44.120438549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bd47b4b74-vfflx,Uid:83ee94e0-1878-4e75-baa3-051f7bc1d26a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a\"" Jul 7 06:15:44.121681 containerd[1726]: time="2025-07-07T06:15:44.121612527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 06:15:44.266921 containerd[1726]: time="2025-07-07T06:15:44.266889843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80fab047f314b5264709e3ac04ce7d5622880ee08b2caa82501a141979a853df\" id:\"38f3c69f28a41e6a5efdbaf45abef3e4082f54191dbb092144470646b2f4e68a\" pid:4320 exit_status:1 exited_at:{seconds:1751868944 nanos:266683647}" Jul 7 06:15:44.893097 systemd-networkd[1357]: vxlan.calico: Link UP Jul 7 06:15:44.893226 systemd-networkd[1357]: vxlan.calico: Gained carrier Jul 7 06:15:45.082176 kubelet[3158]: I0707 06:15:45.082148 3158 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d06071ec-2755-4e17-84e0-19c69fd399e6" path="/var/lib/kubelet/pods/d06071ec-2755-4e17-84e0-19c69fd399e6/volumes" Jul 7 06:15:45.375810 systemd-networkd[1357]: califd26ad9d882: Gained IPv6LL Jul 7 06:15:45.784174 containerd[1726]: 
time="2025-07-07T06:15:45.784141609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:45.787559 containerd[1726]: time="2025-07-07T06:15:45.787525308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 06:15:45.791226 containerd[1726]: time="2025-07-07T06:15:45.791174984Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:45.795614 containerd[1726]: time="2025-07-07T06:15:45.795576134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:45.796242 containerd[1726]: time="2025-07-07T06:15:45.795938416Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.674301465s" Jul 7 06:15:45.796242 containerd[1726]: time="2025-07-07T06:15:45.795963242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 06:15:45.797482 containerd[1726]: time="2025-07-07T06:15:45.797457205Z" level=info msg="CreateContainer within sandbox \"ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 06:15:45.826902 containerd[1726]: time="2025-07-07T06:15:45.826878768Z" level=info msg="Container 
763becfe5646eb48071bf7c8276d50e3516f9dabde98db29af1acd6ba9f6557b: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:45.831468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702470130.mount: Deactivated successfully. Jul 7 06:15:45.856188 containerd[1726]: time="2025-07-07T06:15:45.856164127Z" level=info msg="CreateContainer within sandbox \"ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"763becfe5646eb48071bf7c8276d50e3516f9dabde98db29af1acd6ba9f6557b\"" Jul 7 06:15:45.856629 containerd[1726]: time="2025-07-07T06:15:45.856523422Z" level=info msg="StartContainer for \"763becfe5646eb48071bf7c8276d50e3516f9dabde98db29af1acd6ba9f6557b\"" Jul 7 06:15:45.857483 containerd[1726]: time="2025-07-07T06:15:45.857438747Z" level=info msg="connecting to shim 763becfe5646eb48071bf7c8276d50e3516f9dabde98db29af1acd6ba9f6557b" address="unix:///run/containerd/s/db2f83e58beac0119e5257bb88a7e76f0c0492162eb2b5cdc516b678d429c12d" protocol=ttrpc version=3 Jul 7 06:15:45.874947 systemd[1]: Started cri-containerd-763becfe5646eb48071bf7c8276d50e3516f9dabde98db29af1acd6ba9f6557b.scope - libcontainer container 763becfe5646eb48071bf7c8276d50e3516f9dabde98db29af1acd6ba9f6557b. 
Jul 7 06:15:45.913508 containerd[1726]: time="2025-07-07T06:15:45.913477808Z" level=info msg="StartContainer for \"763becfe5646eb48071bf7c8276d50e3516f9dabde98db29af1acd6ba9f6557b\" returns successfully" Jul 7 06:15:45.914915 containerd[1726]: time="2025-07-07T06:15:45.914897287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 06:15:46.080681 containerd[1726]: time="2025-07-07T06:15:46.080554722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zmsvc,Uid:8493a410-5652-4168-b38b-c86eb164b3f7,Namespace:kube-system,Attempt:0,}" Jul 7 06:15:46.080878 containerd[1726]: time="2025-07-07T06:15:46.080554699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vmkr5,Uid:c309951e-4538-4e38-9e2d-da07ad208ca7,Namespace:kube-system,Attempt:0,}" Jul 7 06:15:46.207239 systemd-networkd[1357]: cali25f32e3f287: Link UP Jul 7 06:15:46.208934 systemd-networkd[1357]: cali25f32e3f287: Gained carrier Jul 7 06:15:46.231657 containerd[1726]: 2025-07-07 06:15:46.135 [INFO][4567] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--vmkr5-eth0 coredns-7c65d6cfc9- kube-system c309951e-4538-4e38-9e2d-da07ad208ca7 863 0 2025-07-07 06:14:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-a-04b45ab1a6 coredns-7c65d6cfc9-vmkr5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali25f32e3f287 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vmkr5" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--vmkr5-" Jul 7 06:15:46.231657 containerd[1726]: 2025-07-07 06:15:46.135 
[INFO][4567] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vmkr5" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--vmkr5-eth0" Jul 7 06:15:46.231657 containerd[1726]: 2025-07-07 06:15:46.157 [INFO][4586] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" HandleID="k8s-pod-network.b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--vmkr5-eth0" Jul 7 06:15:46.231824 containerd[1726]: 2025-07-07 06:15:46.157 [INFO][4586] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" HandleID="k8s-pod-network.b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--vmkr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-a-04b45ab1a6", "pod":"coredns-7c65d6cfc9-vmkr5", "timestamp":"2025-07-07 06:15:46.157281344 +0000 UTC"}, Hostname:"ci-4372.0.1-a-04b45ab1a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:15:46.231824 containerd[1726]: 2025-07-07 06:15:46.157 [INFO][4586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:15:46.231824 containerd[1726]: 2025-07-07 06:15:46.157 [INFO][4586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:15:46.231824 containerd[1726]: 2025-07-07 06:15:46.157 [INFO][4586] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-04b45ab1a6' Jul 7 06:15:46.231824 containerd[1726]: 2025-07-07 06:15:46.161 [INFO][4586] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.231824 containerd[1726]: 2025-07-07 06:15:46.164 [INFO][4586] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.231824 containerd[1726]: 2025-07-07 06:15:46.167 [INFO][4586] ipam/ipam.go 511: Trying affinity for 192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.231824 containerd[1726]: 2025-07-07 06:15:46.168 [INFO][4586] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.231824 containerd[1726]: 2025-07-07 06:15:46.170 [INFO][4586] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.232006 containerd[1726]: 2025-07-07 06:15:46.170 [INFO][4586] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.232006 containerd[1726]: 2025-07-07 06:15:46.172 [INFO][4586] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385 Jul 7 06:15:46.232006 containerd[1726]: 2025-07-07 06:15:46.181 [INFO][4586] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.232006 containerd[1726]: 2025-07-07 06:15:46.196 [INFO][4586] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.60.194/26] block=192.168.60.192/26 handle="k8s-pod-network.b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.232006 containerd[1726]: 2025-07-07 06:15:46.196 [INFO][4586] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.194/26] handle="k8s-pod-network.b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.232006 containerd[1726]: 2025-07-07 06:15:46.196 [INFO][4586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:15:46.232006 containerd[1726]: 2025-07-07 06:15:46.196 [INFO][4586] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.194/26] IPv6=[] ContainerID="b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" HandleID="k8s-pod-network.b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--vmkr5-eth0" Jul 7 06:15:46.232134 containerd[1726]: 2025-07-07 06:15:46.198 [INFO][4567] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vmkr5" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--vmkr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--vmkr5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c309951e-4538-4e38-9e2d-da07ad208ca7", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"", Pod:"coredns-7c65d6cfc9-vmkr5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25f32e3f287", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:46.232134 containerd[1726]: 2025-07-07 06:15:46.205 [INFO][4567] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.194/32] ContainerID="b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vmkr5" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--vmkr5-eth0" Jul 7 06:15:46.232134 containerd[1726]: 2025-07-07 06:15:46.205 [INFO][4567] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25f32e3f287 ContainerID="b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vmkr5" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--vmkr5-eth0" Jul 7 06:15:46.232134 containerd[1726]: 2025-07-07 06:15:46.206 [INFO][4567] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vmkr5" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--vmkr5-eth0" Jul 7 06:15:46.232134 containerd[1726]: 2025-07-07 06:15:46.207 [INFO][4567] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vmkr5" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--vmkr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--vmkr5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c309951e-4538-4e38-9e2d-da07ad208ca7", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385", Pod:"coredns-7c65d6cfc9-vmkr5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25f32e3f287", MAC:"7a:e4:70:6a:5d:8c", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:46.232134 containerd[1726]: 2025-07-07 06:15:46.229 [INFO][4567] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vmkr5" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--vmkr5-eth0" Jul 7 06:15:46.285424 containerd[1726]: time="2025-07-07T06:15:46.285368233Z" level=info msg="connecting to shim b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385" address="unix:///run/containerd/s/016ddecad9c3dffba0ad97814fb804d84f6cddbb283b3fd8de1b97b876199e2f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:15:46.295014 systemd-networkd[1357]: cali838a0c467cf: Link UP Jul 7 06:15:46.295177 systemd-networkd[1357]: cali838a0c467cf: Gained carrier Jul 7 06:15:46.310007 systemd[1]: Started cri-containerd-b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385.scope - libcontainer container b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385. 
Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.130 [INFO][4557] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--zmsvc-eth0 coredns-7c65d6cfc9- kube-system 8493a410-5652-4168-b38b-c86eb164b3f7 860 0 2025-07-07 06:14:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-a-04b45ab1a6 coredns-7c65d6cfc9-zmsvc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali838a0c467cf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zmsvc" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--zmsvc-" Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.130 [INFO][4557] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zmsvc" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--zmsvc-eth0" Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.157 [INFO][4581] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" HandleID="k8s-pod-network.af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--zmsvc-eth0" Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.157 [INFO][4581] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" HandleID="k8s-pod-network.af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" 
Workload="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--zmsvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5270), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-a-04b45ab1a6", "pod":"coredns-7c65d6cfc9-zmsvc", "timestamp":"2025-07-07 06:15:46.157202389 +0000 UTC"}, Hostname:"ci-4372.0.1-a-04b45ab1a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.157 [INFO][4581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.196 [INFO][4581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.196 [INFO][4581] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-04b45ab1a6' Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.262 [INFO][4581] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.266 [INFO][4581] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.269 [INFO][4581] ipam/ipam.go 511: Trying affinity for 192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.270 [INFO][4581] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.273 [INFO][4581] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.314142 
containerd[1726]: 2025-07-07 06:15:46.273 [INFO][4581] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.274 [INFO][4581] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.278 [INFO][4581] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.288 [INFO][4581] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.60.195/26] block=192.168.60.192/26 handle="k8s-pod-network.af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.288 [INFO][4581] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.195/26] handle="k8s-pod-network.af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.289 [INFO][4581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:15:46.314142 containerd[1726]: 2025-07-07 06:15:46.289 [INFO][4581] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.195/26] IPv6=[] ContainerID="af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" HandleID="k8s-pod-network.af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--zmsvc-eth0" Jul 7 06:15:46.314618 containerd[1726]: 2025-07-07 06:15:46.290 [INFO][4557] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zmsvc" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--zmsvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--zmsvc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8493a410-5652-4168-b38b-c86eb164b3f7", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"", Pod:"coredns-7c65d6cfc9-zmsvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali838a0c467cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:46.314618 containerd[1726]: 2025-07-07 06:15:46.291 [INFO][4557] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.195/32] ContainerID="af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zmsvc" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--zmsvc-eth0" Jul 7 06:15:46.314618 containerd[1726]: 2025-07-07 06:15:46.291 [INFO][4557] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali838a0c467cf ContainerID="af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zmsvc" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--zmsvc-eth0" Jul 7 06:15:46.314618 containerd[1726]: 2025-07-07 06:15:46.294 [INFO][4557] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zmsvc" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--zmsvc-eth0" Jul 7 06:15:46.314618 containerd[1726]: 2025-07-07 06:15:46.295 [INFO][4557] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zmsvc" 
WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--zmsvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--zmsvc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8493a410-5652-4168-b38b-c86eb164b3f7", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d", Pod:"coredns-7c65d6cfc9-zmsvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali838a0c467cf", MAC:"46:27:48:01:3f:a3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:46.314618 containerd[1726]: 
2025-07-07 06:15:46.311 [INFO][4557] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zmsvc" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-coredns--7c65d6cfc9--zmsvc-eth0" Jul 7 06:15:46.368627 containerd[1726]: time="2025-07-07T06:15:46.368566776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vmkr5,Uid:c309951e-4538-4e38-9e2d-da07ad208ca7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385\"" Jul 7 06:15:46.372821 containerd[1726]: time="2025-07-07T06:15:46.372799694Z" level=info msg="CreateContainer within sandbox \"b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:15:46.410615 containerd[1726]: time="2025-07-07T06:15:46.410203315Z" level=info msg="connecting to shim af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d" address="unix:///run/containerd/s/5c1256f41015e2b64e90c0c5b46fc709f4cb23499e8fcfdc6007db129361754c" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:15:46.414170 containerd[1726]: time="2025-07-07T06:15:46.414149192Z" level=info msg="Container 8bc70a5544cb67a736eec28401029ddbd260fbc960a99d0e2db5417dd642d94c: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:46.429849 systemd[1]: Started cri-containerd-af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d.scope - libcontainer container af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d. 
Jul 7 06:15:46.433584 containerd[1726]: time="2025-07-07T06:15:46.433562614Z" level=info msg="CreateContainer within sandbox \"b3b5868c18ecbd07cae8380da83e492cbed2a7eb0adf725e290e43768f472385\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8bc70a5544cb67a736eec28401029ddbd260fbc960a99d0e2db5417dd642d94c\"" Jul 7 06:15:46.434308 containerd[1726]: time="2025-07-07T06:15:46.434291538Z" level=info msg="StartContainer for \"8bc70a5544cb67a736eec28401029ddbd260fbc960a99d0e2db5417dd642d94c\"" Jul 7 06:15:46.435436 containerd[1726]: time="2025-07-07T06:15:46.435391228Z" level=info msg="connecting to shim 8bc70a5544cb67a736eec28401029ddbd260fbc960a99d0e2db5417dd642d94c" address="unix:///run/containerd/s/016ddecad9c3dffba0ad97814fb804d84f6cddbb283b3fd8de1b97b876199e2f" protocol=ttrpc version=3 Jul 7 06:15:46.453811 systemd[1]: Started cri-containerd-8bc70a5544cb67a736eec28401029ddbd260fbc960a99d0e2db5417dd642d94c.scope - libcontainer container 8bc70a5544cb67a736eec28401029ddbd260fbc960a99d0e2db5417dd642d94c. 
Jul 7 06:15:46.465141 systemd-networkd[1357]: vxlan.calico: Gained IPv6LL Jul 7 06:15:46.485834 containerd[1726]: time="2025-07-07T06:15:46.485810480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zmsvc,Uid:8493a410-5652-4168-b38b-c86eb164b3f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d\"" Jul 7 06:15:46.487744 containerd[1726]: time="2025-07-07T06:15:46.487688518Z" level=info msg="StartContainer for \"8bc70a5544cb67a736eec28401029ddbd260fbc960a99d0e2db5417dd642d94c\" returns successfully" Jul 7 06:15:46.489196 containerd[1726]: time="2025-07-07T06:15:46.489163973Z" level=info msg="CreateContainer within sandbox \"af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:15:46.539559 containerd[1726]: time="2025-07-07T06:15:46.539538650Z" level=info msg="Container 880952d48e6766be911d22c9bcdd156a929aed35c8e3770e988a0df711844004: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:46.568963 containerd[1726]: time="2025-07-07T06:15:46.568941253Z" level=info msg="CreateContainer within sandbox \"af5504eb7db8ef6088816f7188bede30d32f2385f660625887445c11fa27075d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"880952d48e6766be911d22c9bcdd156a929aed35c8e3770e988a0df711844004\"" Jul 7 06:15:46.569375 containerd[1726]: time="2025-07-07T06:15:46.569301044Z" level=info msg="StartContainer for \"880952d48e6766be911d22c9bcdd156a929aed35c8e3770e988a0df711844004\"" Jul 7 06:15:46.570155 containerd[1726]: time="2025-07-07T06:15:46.570128535Z" level=info msg="connecting to shim 880952d48e6766be911d22c9bcdd156a929aed35c8e3770e988a0df711844004" address="unix:///run/containerd/s/5c1256f41015e2b64e90c0c5b46fc709f4cb23499e8fcfdc6007db129361754c" protocol=ttrpc version=3 Jul 7 06:15:46.585849 systemd[1]: Started 
cri-containerd-880952d48e6766be911d22c9bcdd156a929aed35c8e3770e988a0df711844004.scope - libcontainer container 880952d48e6766be911d22c9bcdd156a929aed35c8e3770e988a0df711844004. Jul 7 06:15:46.609740 containerd[1726]: time="2025-07-07T06:15:46.609716542Z" level=info msg="StartContainer for \"880952d48e6766be911d22c9bcdd156a929aed35c8e3770e988a0df711844004\" returns successfully" Jul 7 06:15:47.281726 kubelet[3158]: I0707 06:15:47.280768 3158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zmsvc" podStartSLOduration=55.280752668 podStartE2EDuration="55.280752668s" podCreationTimestamp="2025-07-07 06:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:15:47.246035168 +0000 UTC m=+60.235274130" watchObservedRunningTime="2025-07-07 06:15:47.280752668 +0000 UTC m=+60.269991623" Jul 7 06:15:47.305990 kubelet[3158]: I0707 06:15:47.305932 3158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vmkr5" podStartSLOduration=55.305918703 podStartE2EDuration="55.305918703s" podCreationTimestamp="2025-07-07 06:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:15:47.281013683 +0000 UTC m=+60.270252640" watchObservedRunningTime="2025-07-07 06:15:47.305918703 +0000 UTC m=+60.295157657" Jul 7 06:15:47.999888 systemd-networkd[1357]: cali25f32e3f287: Gained IPv6LL Jul 7 06:15:48.080528 containerd[1726]: time="2025-07-07T06:15:48.080499878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-8prv9,Uid:977c4d92-213a-49c0-b436-ff30c61708a2,Namespace:calico-system,Attempt:0,}" Jul 7 06:15:48.204851 systemd-networkd[1357]: cali08137231297: Link UP Jul 7 06:15:48.205012 systemd-networkd[1357]: cali08137231297: Gained carrier Jul 7 06:15:48.221589 
containerd[1726]: 2025-07-07 06:15:48.128 [INFO][4784] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--04b45ab1a6-k8s-goldmane--58fd7646b9--8prv9-eth0 goldmane-58fd7646b9- calico-system 977c4d92-213a-49c0-b436-ff30c61708a2 869 0 2025-07-07 06:15:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.0.1-a-04b45ab1a6 goldmane-58fd7646b9-8prv9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali08137231297 [] [] }} ContainerID="18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" Namespace="calico-system" Pod="goldmane-58fd7646b9-8prv9" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-goldmane--58fd7646b9--8prv9-" Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.129 [INFO][4784] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" Namespace="calico-system" Pod="goldmane-58fd7646b9-8prv9" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-goldmane--58fd7646b9--8prv9-eth0" Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.158 [INFO][4796] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" HandleID="k8s-pod-network.18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-goldmane--58fd7646b9--8prv9-eth0" Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.158 [INFO][4796] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" HandleID="k8s-pod-network.18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" 
Workload="ci--4372.0.1--a--04b45ab1a6-k8s-goldmane--58fd7646b9--8prv9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f170), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-a-04b45ab1a6", "pod":"goldmane-58fd7646b9-8prv9", "timestamp":"2025-07-07 06:15:48.158123099 +0000 UTC"}, Hostname:"ci-4372.0.1-a-04b45ab1a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.159 [INFO][4796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.159 [INFO][4796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.159 [INFO][4796] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-04b45ab1a6' Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.166 [INFO][4796] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.173 [INFO][4796] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.177 [INFO][4796] ipam/ipam.go 511: Trying affinity for 192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.179 [INFO][4796] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.182 [INFO][4796] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:48.221589 
containerd[1726]: 2025-07-07 06:15:48.182 [INFO][4796] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.184 [INFO][4796] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946 Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.191 [INFO][4796] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.199 [INFO][4796] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.60.196/26] block=192.168.60.192/26 handle="k8s-pod-network.18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.199 [INFO][4796] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.196/26] handle="k8s-pod-network.18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.199 [INFO][4796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:15:48.221589 containerd[1726]: 2025-07-07 06:15:48.199 [INFO][4796] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.196/26] IPv6=[] ContainerID="18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" HandleID="k8s-pod-network.18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-goldmane--58fd7646b9--8prv9-eth0" Jul 7 06:15:48.222455 containerd[1726]: 2025-07-07 06:15:48.201 [INFO][4784] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" Namespace="calico-system" Pod="goldmane-58fd7646b9-8prv9" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-goldmane--58fd7646b9--8prv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-goldmane--58fd7646b9--8prv9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"977c4d92-213a-49c0-b436-ff30c61708a2", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 15, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"", Pod:"goldmane-58fd7646b9-8prv9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali08137231297", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:48.222455 containerd[1726]: 2025-07-07 06:15:48.201 [INFO][4784] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.196/32] ContainerID="18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" Namespace="calico-system" Pod="goldmane-58fd7646b9-8prv9" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-goldmane--58fd7646b9--8prv9-eth0" Jul 7 06:15:48.222455 containerd[1726]: 2025-07-07 06:15:48.201 [INFO][4784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08137231297 ContainerID="18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" Namespace="calico-system" Pod="goldmane-58fd7646b9-8prv9" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-goldmane--58fd7646b9--8prv9-eth0" Jul 7 06:15:48.222455 containerd[1726]: 2025-07-07 06:15:48.204 [INFO][4784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" Namespace="calico-system" Pod="goldmane-58fd7646b9-8prv9" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-goldmane--58fd7646b9--8prv9-eth0" Jul 7 06:15:48.222455 containerd[1726]: 2025-07-07 06:15:48.205 [INFO][4784] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" Namespace="calico-system" Pod="goldmane-58fd7646b9-8prv9" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-goldmane--58fd7646b9--8prv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-goldmane--58fd7646b9--8prv9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", 
UID:"977c4d92-213a-49c0-b436-ff30c61708a2", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 15, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946", Pod:"goldmane-58fd7646b9-8prv9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali08137231297", MAC:"b6:bc:5c:cc:01:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:48.222455 containerd[1726]: 2025-07-07 06:15:48.220 [INFO][4784] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" Namespace="calico-system" Pod="goldmane-58fd7646b9-8prv9" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-goldmane--58fd7646b9--8prv9-eth0" Jul 7 06:15:48.256095 systemd-networkd[1357]: cali838a0c467cf: Gained IPv6LL Jul 7 06:15:48.289723 containerd[1726]: time="2025-07-07T06:15:48.289290635Z" level=info msg="connecting to shim 18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946" address="unix:///run/containerd/s/d044b95f13b5a6515785b99c7d2f30b43903427e5f0a6a4f40faf02f1114d290" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:15:48.316903 
systemd[1]: Started cri-containerd-18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946.scope - libcontainer container 18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946. Jul 7 06:15:48.342437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4113640684.mount: Deactivated successfully. Jul 7 06:15:48.363182 containerd[1726]: time="2025-07-07T06:15:48.363159809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-8prv9,Uid:977c4d92-213a-49c0-b436-ff30c61708a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946\"" Jul 7 06:15:48.418592 containerd[1726]: time="2025-07-07T06:15:48.418562650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:48.423051 containerd[1726]: time="2025-07-07T06:15:48.423024400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 06:15:48.431023 containerd[1726]: time="2025-07-07T06:15:48.430985220Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:48.438972 containerd[1726]: time="2025-07-07T06:15:48.438935846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:48.439399 containerd[1726]: time="2025-07-07T06:15:48.439308191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.524384968s" Jul 7 06:15:48.439399 containerd[1726]: time="2025-07-07T06:15:48.439334821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 06:15:48.440220 containerd[1726]: time="2025-07-07T06:15:48.440060744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 06:15:48.441087 containerd[1726]: time="2025-07-07T06:15:48.441065778Z" level=info msg="CreateContainer within sandbox \"ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 06:15:48.488166 containerd[1726]: time="2025-07-07T06:15:48.488145457Z" level=info msg="Container 4c3f53415a1ce85139ee81af5b87db9baea5e424efd725ed448a1a9af0a81cf4: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:48.507006 containerd[1726]: time="2025-07-07T06:15:48.506948420Z" level=info msg="CreateContainer within sandbox \"ed2a21b1149c7e74492a3b77674f91e2f4426524330060492c8f5d6580ac935a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4c3f53415a1ce85139ee81af5b87db9baea5e424efd725ed448a1a9af0a81cf4\"" Jul 7 06:15:48.507646 containerd[1726]: time="2025-07-07T06:15:48.507617525Z" level=info msg="StartContainer for \"4c3f53415a1ce85139ee81af5b87db9baea5e424efd725ed448a1a9af0a81cf4\"" Jul 7 06:15:48.508669 containerd[1726]: time="2025-07-07T06:15:48.508641573Z" level=info msg="connecting to shim 4c3f53415a1ce85139ee81af5b87db9baea5e424efd725ed448a1a9af0a81cf4" address="unix:///run/containerd/s/db2f83e58beac0119e5257bb88a7e76f0c0492162eb2b5cdc516b678d429c12d" protocol=ttrpc version=3 Jul 7 06:15:48.526847 systemd[1]: Started cri-containerd-4c3f53415a1ce85139ee81af5b87db9baea5e424efd725ed448a1a9af0a81cf4.scope - 
libcontainer container 4c3f53415a1ce85139ee81af5b87db9baea5e424efd725ed448a1a9af0a81cf4. Jul 7 06:15:48.566114 containerd[1726]: time="2025-07-07T06:15:48.566081286Z" level=info msg="StartContainer for \"4c3f53415a1ce85139ee81af5b87db9baea5e424efd725ed448a1a9af0a81cf4\" returns successfully" Jul 7 06:15:49.081604 containerd[1726]: time="2025-07-07T06:15:49.080813421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xpqzl,Uid:231c5af8-3370-4b3f-ab8c-8299c58a8f69,Namespace:calico-system,Attempt:0,}" Jul 7 06:15:49.081604 containerd[1726]: time="2025-07-07T06:15:49.080815601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c97b774-7z2jl,Uid:c1cf1a75-75e3-4759-a756-65fb7c708ccd,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:15:49.081604 containerd[1726]: time="2025-07-07T06:15:49.081535544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c97b774-4l6xr,Uid:0ace5f14-39da-487a-8159-8a8fa4e985ab,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:15:49.240210 systemd-networkd[1357]: calia62acdc26d8: Link UP Jul 7 06:15:49.240831 systemd-networkd[1357]: calia62acdc26d8: Gained carrier Jul 7 06:15:49.263297 kubelet[3158]: I0707 06:15:49.262426 3158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7bd47b4b74-vfflx" podStartSLOduration=1.943844817 podStartE2EDuration="6.262409536s" podCreationTimestamp="2025-07-07 06:15:43 +0000 UTC" firstStartedPulling="2025-07-07 06:15:44.121390085 +0000 UTC m=+57.110629035" lastFinishedPulling="2025-07-07 06:15:48.439954801 +0000 UTC m=+61.429193754" observedRunningTime="2025-07-07 06:15:49.248569662 +0000 UTC m=+62.237808621" watchObservedRunningTime="2025-07-07 06:15:49.262409536 +0000 UTC m=+62.251648496" Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.145 [INFO][4893] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4372.0.1--a--04b45ab1a6-k8s-csi--node--driver--xpqzl-eth0 csi-node-driver- calico-system 231c5af8-3370-4b3f-ab8c-8299c58a8f69 725 0 2025-07-07 06:15:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.0.1-a-04b45ab1a6 csi-node-driver-xpqzl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia62acdc26d8 [] [] }} ContainerID="7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" Namespace="calico-system" Pod="csi-node-driver-xpqzl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-csi--node--driver--xpqzl-" Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.145 [INFO][4893] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" Namespace="calico-system" Pod="csi-node-driver-xpqzl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-csi--node--driver--xpqzl-eth0" Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.189 [INFO][4929] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" HandleID="k8s-pod-network.7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-csi--node--driver--xpqzl-eth0" Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.189 [INFO][4929] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" HandleID="k8s-pod-network.7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-csi--node--driver--xpqzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002c5230), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-a-04b45ab1a6", "pod":"csi-node-driver-xpqzl", "timestamp":"2025-07-07 06:15:49.189053035 +0000 UTC"}, Hostname:"ci-4372.0.1-a-04b45ab1a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.189 [INFO][4929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.189 [INFO][4929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.189 [INFO][4929] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-04b45ab1a6' Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.195 [INFO][4929] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.204 [INFO][4929] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.207 [INFO][4929] ipam/ipam.go 511: Trying affinity for 192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.209 [INFO][4929] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.211 [INFO][4929] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.211 [INFO][4929] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.60.192/26 handle="k8s-pod-network.7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.212 [INFO][4929] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283 Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.218 [INFO][4929] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.233 [INFO][4929] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.60.197/26] block=192.168.60.192/26 handle="k8s-pod-network.7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.233 [INFO][4929] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.197/26] handle="k8s-pod-network.7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.233 [INFO][4929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:15:49.263684 containerd[1726]: 2025-07-07 06:15:49.233 [INFO][4929] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.197/26] IPv6=[] ContainerID="7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" HandleID="k8s-pod-network.7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-csi--node--driver--xpqzl-eth0" Jul 7 06:15:49.264124 containerd[1726]: 2025-07-07 06:15:49.237 [INFO][4893] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" Namespace="calico-system" Pod="csi-node-driver-xpqzl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-csi--node--driver--xpqzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-csi--node--driver--xpqzl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"231c5af8-3370-4b3f-ab8c-8299c58a8f69", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"", Pod:"csi-node-driver-xpqzl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.197/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia62acdc26d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:49.264124 containerd[1726]: 2025-07-07 06:15:49.237 [INFO][4893] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.197/32] ContainerID="7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" Namespace="calico-system" Pod="csi-node-driver-xpqzl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-csi--node--driver--xpqzl-eth0" Jul 7 06:15:49.264124 containerd[1726]: 2025-07-07 06:15:49.237 [INFO][4893] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia62acdc26d8 ContainerID="7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" Namespace="calico-system" Pod="csi-node-driver-xpqzl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-csi--node--driver--xpqzl-eth0" Jul 7 06:15:49.264124 containerd[1726]: 2025-07-07 06:15:49.241 [INFO][4893] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" Namespace="calico-system" Pod="csi-node-driver-xpqzl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-csi--node--driver--xpqzl-eth0" Jul 7 06:15:49.264124 containerd[1726]: 2025-07-07 06:15:49.241 [INFO][4893] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" Namespace="calico-system" Pod="csi-node-driver-xpqzl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-csi--node--driver--xpqzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-csi--node--driver--xpqzl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"231c5af8-3370-4b3f-ab8c-8299c58a8f69", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283", Pod:"csi-node-driver-xpqzl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia62acdc26d8", MAC:"22:fb:d5:05:69:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:49.264124 containerd[1726]: 2025-07-07 06:15:49.261 [INFO][4893] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" Namespace="calico-system" Pod="csi-node-driver-xpqzl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-csi--node--driver--xpqzl-eth0" Jul 7 06:15:49.330059 systemd-networkd[1357]: cali50864bc1510: Link UP Jul 7 06:15:49.331167 systemd-networkd[1357]: cali50864bc1510: Gained carrier Jul 7 06:15:49.343804 systemd-networkd[1357]: cali08137231297: Gained IPv6LL Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.169 [INFO][4904] cni-plugin/plugin.go 
340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--7z2jl-eth0 calico-apiserver-564c97b774- calico-apiserver c1cf1a75-75e3-4759-a756-65fb7c708ccd 867 0 2025-07-07 06:15:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:564c97b774 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-a-04b45ab1a6 calico-apiserver-564c97b774-7z2jl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali50864bc1510 [] [] }} ContainerID="44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-7z2jl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--7z2jl-" Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.170 [INFO][4904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-7z2jl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--7z2jl-eth0" Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.204 [INFO][4938] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" HandleID="k8s-pod-network.44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--7z2jl-eth0" Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.204 [INFO][4938] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" 
HandleID="k8s-pod-network.44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--7z2jl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5970), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-a-04b45ab1a6", "pod":"calico-apiserver-564c97b774-7z2jl", "timestamp":"2025-07-07 06:15:49.204071944 +0000 UTC"}, Hostname:"ci-4372.0.1-a-04b45ab1a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.204 [INFO][4938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.233 [INFO][4938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.233 [INFO][4938] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-04b45ab1a6' Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.295 [INFO][4938] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.302 [INFO][4938] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.307 [INFO][4938] ipam/ipam.go 511: Trying affinity for 192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.308 [INFO][4938] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.309 [INFO][4938] ipam/ipam.go 235: Affinity is 
confirmed and block has been loaded cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.309 [INFO][4938] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.310 [INFO][4938] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589 Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.315 [INFO][4938] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.324 [INFO][4938] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.60.198/26] block=192.168.60.192/26 handle="k8s-pod-network.44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.324 [INFO][4938] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.198/26] handle="k8s-pod-network.44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.324 [INFO][4938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:15:49.352227 containerd[1726]: 2025-07-07 06:15:49.324 [INFO][4938] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.198/26] IPv6=[] ContainerID="44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" HandleID="k8s-pod-network.44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--7z2jl-eth0" Jul 7 06:15:49.352998 containerd[1726]: 2025-07-07 06:15:49.326 [INFO][4904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-7z2jl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--7z2jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--7z2jl-eth0", GenerateName:"calico-apiserver-564c97b774-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1cf1a75-75e3-4759-a756-65fb7c708ccd", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 15, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564c97b774", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"", Pod:"calico-apiserver-564c97b774-7z2jl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.60.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50864bc1510", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:49.352998 containerd[1726]: 2025-07-07 06:15:49.326 [INFO][4904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.198/32] ContainerID="44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-7z2jl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--7z2jl-eth0" Jul 7 06:15:49.352998 containerd[1726]: 2025-07-07 06:15:49.326 [INFO][4904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50864bc1510 ContainerID="44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-7z2jl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--7z2jl-eth0" Jul 7 06:15:49.352998 containerd[1726]: 2025-07-07 06:15:49.331 [INFO][4904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-7z2jl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--7z2jl-eth0" Jul 7 06:15:49.352998 containerd[1726]: 2025-07-07 06:15:49.332 [INFO][4904] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-7z2jl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--7z2jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--7z2jl-eth0", GenerateName:"calico-apiserver-564c97b774-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1cf1a75-75e3-4759-a756-65fb7c708ccd", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 15, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564c97b774", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589", Pod:"calico-apiserver-564c97b774-7z2jl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50864bc1510", MAC:"b2:37:47:11:63:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:49.352998 containerd[1726]: 2025-07-07 06:15:49.349 [INFO][4904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-7z2jl" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--7z2jl-eth0" Jul 7 06:15:49.366729 containerd[1726]: time="2025-07-07T06:15:49.366657055Z" level=info 
msg="connecting to shim 7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283" address="unix:///run/containerd/s/aae0116d3b0816a2976d261e8bae2e257367ee09d079c0c08c7b39a92dd612bd" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:15:49.385820 systemd[1]: Started cri-containerd-7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283.scope - libcontainer container 7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283. Jul 7 06:15:49.415254 containerd[1726]: time="2025-07-07T06:15:49.415082457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xpqzl,Uid:231c5af8-3370-4b3f-ab8c-8299c58a8f69,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283\"" Jul 7 06:15:49.419669 containerd[1726]: time="2025-07-07T06:15:49.419585750Z" level=info msg="connecting to shim 44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589" address="unix:///run/containerd/s/e3b2cf319716a541524bacdba7fbfb66e150b11995e47db92e11e054c4b5490f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:15:49.435977 systemd[1]: Started cri-containerd-44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589.scope - libcontainer container 44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589. 
Jul 7 06:15:49.452290 systemd-networkd[1357]: califd3ea237f19: Link UP Jul 7 06:15:49.453324 systemd-networkd[1357]: califd3ea237f19: Gained carrier Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.174 [INFO][4914] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--4l6xr-eth0 calico-apiserver-564c97b774- calico-apiserver 0ace5f14-39da-487a-8159-8a8fa4e985ab 868 0 2025-07-07 06:15:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:564c97b774 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-a-04b45ab1a6 calico-apiserver-564c97b774-4l6xr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califd3ea237f19 [] [] }} ContainerID="2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-4l6xr" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--4l6xr-" Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.174 [INFO][4914] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-4l6xr" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--4l6xr-eth0" Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.209 [INFO][4943] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" HandleID="k8s-pod-network.2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--4l6xr-eth0" Jul 7 06:15:49.480965 
containerd[1726]: 2025-07-07 06:15:49.210 [INFO][4943] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" HandleID="k8s-pod-network.2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--4l6xr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-a-04b45ab1a6", "pod":"calico-apiserver-564c97b774-4l6xr", "timestamp":"2025-07-07 06:15:49.209891853 +0000 UTC"}, Hostname:"ci-4372.0.1-a-04b45ab1a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.210 [INFO][4943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.324 [INFO][4943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.324 [INFO][4943] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-04b45ab1a6' Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.402 [INFO][4943] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.407 [INFO][4943] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.414 [INFO][4943] ipam/ipam.go 511: Trying affinity for 192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.418 [INFO][4943] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.422 [INFO][4943] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.422 [INFO][4943] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.424 [INFO][4943] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.435 [INFO][4943] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.447 [INFO][4943] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.60.199/26] block=192.168.60.192/26 handle="k8s-pod-network.2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.447 [INFO][4943] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.199/26] handle="k8s-pod-network.2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.447 [INFO][4943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:15:49.480965 containerd[1726]: 2025-07-07 06:15:49.447 [INFO][4943] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.199/26] IPv6=[] ContainerID="2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" HandleID="k8s-pod-network.2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--4l6xr-eth0" Jul 7 06:15:49.481422 containerd[1726]: 2025-07-07 06:15:49.450 [INFO][4914] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-4l6xr" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--4l6xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--4l6xr-eth0", GenerateName:"calico-apiserver-564c97b774-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ace5f14-39da-487a-8159-8a8fa4e985ab", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 15, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"564c97b774", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"", Pod:"calico-apiserver-564c97b774-4l6xr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd3ea237f19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:49.481422 containerd[1726]: 2025-07-07 06:15:49.450 [INFO][4914] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.199/32] ContainerID="2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-4l6xr" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--4l6xr-eth0" Jul 7 06:15:49.481422 containerd[1726]: 2025-07-07 06:15:49.450 [INFO][4914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd3ea237f19 ContainerID="2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-4l6xr" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--4l6xr-eth0" Jul 7 06:15:49.481422 containerd[1726]: 2025-07-07 06:15:49.452 [INFO][4914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-4l6xr" 
WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--4l6xr-eth0" Jul 7 06:15:49.481422 containerd[1726]: 2025-07-07 06:15:49.453 [INFO][4914] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-4l6xr" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--4l6xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--4l6xr-eth0", GenerateName:"calico-apiserver-564c97b774-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ace5f14-39da-487a-8159-8a8fa4e985ab", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 15, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564c97b774", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf", Pod:"calico-apiserver-564c97b774-4l6xr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd3ea237f19", MAC:"2e:72:d5:b3:ef:c2", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:49.481422 containerd[1726]: 2025-07-07 06:15:49.478 [INFO][4914] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" Namespace="calico-apiserver" Pod="calico-apiserver-564c97b774-4l6xr" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--apiserver--564c97b774--4l6xr-eth0" Jul 7 06:15:49.481607 containerd[1726]: time="2025-07-07T06:15:49.481474570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c97b774-7z2jl,Uid:c1cf1a75-75e3-4759-a756-65fb7c708ccd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589\"" Jul 7 06:15:49.534142 containerd[1726]: time="2025-07-07T06:15:49.534109793Z" level=info msg="connecting to shim 2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf" address="unix:///run/containerd/s/75afc48147c100f23889ae85257cf2320e737005391b1cc08ee50e9634baf422" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:15:49.551859 systemd[1]: Started cri-containerd-2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf.scope - libcontainer container 2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf. Jul 7 06:15:49.602082 containerd[1726]: time="2025-07-07T06:15:49.601898777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c97b774-4l6xr,Uid:0ace5f14-39da-487a-8159-8a8fa4e985ab,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf\"" Jul 7 06:15:50.495874 systemd-networkd[1357]: calia62acdc26d8: Gained IPv6LL Jul 7 06:15:50.815979 systemd-networkd[1357]: califd3ea237f19: Gained IPv6LL Jul 7 06:15:50.915969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1257079716.mount: Deactivated successfully. 
Jul 7 06:15:51.081213 containerd[1726]: time="2025-07-07T06:15:51.081138664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cbcdccb6b-29z7l,Uid:fae38f98-d440-4845-94b6-2cd7545ba6a7,Namespace:calico-system,Attempt:0,}" Jul 7 06:15:51.136104 systemd-networkd[1357]: cali50864bc1510: Gained IPv6LL Jul 7 06:15:51.210769 systemd-networkd[1357]: cali046996c198e: Link UP Jul 7 06:15:51.212087 systemd-networkd[1357]: cali046996c198e: Gained carrier Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.134 [INFO][5130] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--04b45ab1a6-k8s-calico--kube--controllers--cbcdccb6b--29z7l-eth0 calico-kube-controllers-cbcdccb6b- calico-system fae38f98-d440-4845-94b6-2cd7545ba6a7 870 0 2025-07-07 06:15:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cbcdccb6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.0.1-a-04b45ab1a6 calico-kube-controllers-cbcdccb6b-29z7l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali046996c198e [] [] }} ContainerID="d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" Namespace="calico-system" Pod="calico-kube-controllers-cbcdccb6b-29z7l" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--kube--controllers--cbcdccb6b--29z7l-" Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.134 [INFO][5130] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" Namespace="calico-system" Pod="calico-kube-controllers-cbcdccb6b-29z7l" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--kube--controllers--cbcdccb6b--29z7l-eth0" Jul 7 06:15:51.237382 containerd[1726]: 
2025-07-07 06:15:51.164 [INFO][5141] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" HandleID="k8s-pod-network.d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-calico--kube--controllers--cbcdccb6b--29z7l-eth0" Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.164 [INFO][5141] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" HandleID="k8s-pod-network.d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-calico--kube--controllers--cbcdccb6b--29z7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5080), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-a-04b45ab1a6", "pod":"calico-kube-controllers-cbcdccb6b-29z7l", "timestamp":"2025-07-07 06:15:51.164217407 +0000 UTC"}, Hostname:"ci-4372.0.1-a-04b45ab1a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.164 [INFO][5141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.164 [INFO][5141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.164 [INFO][5141] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-04b45ab1a6' Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.172 [INFO][5141] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.176 [INFO][5141] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.180 [INFO][5141] ipam/ipam.go 511: Trying affinity for 192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.182 [INFO][5141] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.185 [INFO][5141] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.185 [INFO][5141] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.186 [INFO][5141] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924 Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.192 [INFO][5141] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.205 [INFO][5141] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.60.200/26] block=192.168.60.192/26 handle="k8s-pod-network.d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.205 [INFO][5141] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.200/26] handle="k8s-pod-network.d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" host="ci-4372.0.1-a-04b45ab1a6" Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.206 [INFO][5141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:15:51.237382 containerd[1726]: 2025-07-07 06:15:51.206 [INFO][5141] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.200/26] IPv6=[] ContainerID="d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" HandleID="k8s-pod-network.d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" Workload="ci--4372.0.1--a--04b45ab1a6-k8s-calico--kube--controllers--cbcdccb6b--29z7l-eth0" Jul 7 06:15:51.238204 containerd[1726]: 2025-07-07 06:15:51.207 [INFO][5130] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" Namespace="calico-system" Pod="calico-kube-controllers-cbcdccb6b-29z7l" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--kube--controllers--cbcdccb6b--29z7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-calico--kube--controllers--cbcdccb6b--29z7l-eth0", GenerateName:"calico-kube-controllers-cbcdccb6b-", Namespace:"calico-system", SelfLink:"", UID:"fae38f98-d440-4845-94b6-2cd7545ba6a7", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"cbcdccb6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"", Pod:"calico-kube-controllers-cbcdccb6b-29z7l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali046996c198e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:51.238204 containerd[1726]: 2025-07-07 06:15:51.207 [INFO][5130] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.200/32] ContainerID="d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" Namespace="calico-system" Pod="calico-kube-controllers-cbcdccb6b-29z7l" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--kube--controllers--cbcdccb6b--29z7l-eth0" Jul 7 06:15:51.238204 containerd[1726]: 2025-07-07 06:15:51.207 [INFO][5130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali046996c198e ContainerID="d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" Namespace="calico-system" Pod="calico-kube-controllers-cbcdccb6b-29z7l" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--kube--controllers--cbcdccb6b--29z7l-eth0" Jul 7 06:15:51.238204 containerd[1726]: 2025-07-07 06:15:51.211 [INFO][5130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" Namespace="calico-system" 
Pod="calico-kube-controllers-cbcdccb6b-29z7l" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--kube--controllers--cbcdccb6b--29z7l-eth0" Jul 7 06:15:51.238204 containerd[1726]: 2025-07-07 06:15:51.213 [INFO][5130] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" Namespace="calico-system" Pod="calico-kube-controllers-cbcdccb6b-29z7l" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--kube--controllers--cbcdccb6b--29z7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--04b45ab1a6-k8s-calico--kube--controllers--cbcdccb6b--29z7l-eth0", GenerateName:"calico-kube-controllers-cbcdccb6b-", Namespace:"calico-system", SelfLink:"", UID:"fae38f98-d440-4845-94b6-2cd7545ba6a7", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cbcdccb6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-04b45ab1a6", ContainerID:"d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924", Pod:"calico-kube-controllers-cbcdccb6b-29z7l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali046996c198e", MAC:"42:ba:50:a4:03:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:15:51.238204 containerd[1726]: 2025-07-07 06:15:51.235 [INFO][5130] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" Namespace="calico-system" Pod="calico-kube-controllers-cbcdccb6b-29z7l" WorkloadEndpoint="ci--4372.0.1--a--04b45ab1a6-k8s-calico--kube--controllers--cbcdccb6b--29z7l-eth0" Jul 7 06:15:51.301105 containerd[1726]: time="2025-07-07T06:15:51.301070900Z" level=info msg="connecting to shim d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924" address="unix:///run/containerd/s/ca3484d94032fe7f1f9d1c32f91ad8bc8de6c34fa4d5c92ec3704fd5277eed7e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:15:51.331965 systemd[1]: Started cri-containerd-d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924.scope - libcontainer container d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924. 
Jul 7 06:15:51.380248 containerd[1726]: time="2025-07-07T06:15:51.380222852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cbcdccb6b-29z7l,Uid:fae38f98-d440-4845-94b6-2cd7545ba6a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924\"" Jul 7 06:15:51.505226 containerd[1726]: time="2025-07-07T06:15:51.505197381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:51.508660 containerd[1726]: time="2025-07-07T06:15:51.508619545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 06:15:51.514824 containerd[1726]: time="2025-07-07T06:15:51.514783469Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:51.522831 containerd[1726]: time="2025-07-07T06:15:51.522787557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:51.523343 containerd[1726]: time="2025-07-07T06:15:51.523265793Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.083182046s" Jul 7 06:15:51.523343 containerd[1726]: time="2025-07-07T06:15:51.523289973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" 
Jul 7 06:15:51.524474 containerd[1726]: time="2025-07-07T06:15:51.524314532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 06:15:51.525282 containerd[1726]: time="2025-07-07T06:15:51.525257987Z" level=info msg="CreateContainer within sandbox \"18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 06:15:51.549054 containerd[1726]: time="2025-07-07T06:15:51.548238109Z" level=info msg="Container 87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:51.554607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1736638176.mount: Deactivated successfully. Jul 7 06:15:51.575316 containerd[1726]: time="2025-07-07T06:15:51.575292421Z" level=info msg="CreateContainer within sandbox \"18aa67e912cae23168bd95f6dde128458f687e0ee51debf889fb619662186946\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc\"" Jul 7 06:15:51.576127 containerd[1726]: time="2025-07-07T06:15:51.576109490Z" level=info msg="StartContainer for \"87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc\"" Jul 7 06:15:51.577198 containerd[1726]: time="2025-07-07T06:15:51.577175348Z" level=info msg="connecting to shim 87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc" address="unix:///run/containerd/s/d044b95f13b5a6515785b99c7d2f30b43903427e5f0a6a4f40faf02f1114d290" protocol=ttrpc version=3 Jul 7 06:15:51.594841 systemd[1]: Started cri-containerd-87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc.scope - libcontainer container 87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc. 
Jul 7 06:15:51.637081 containerd[1726]: time="2025-07-07T06:15:51.637059437Z" level=info msg="StartContainer for \"87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc\" returns successfully"
Jul 7 06:15:52.258211 kubelet[3158]: I0707 06:15:52.258162 3158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-8prv9" podStartSLOduration=41.098110301 podStartE2EDuration="44.258148155s" podCreationTimestamp="2025-07-07 06:15:08 +0000 UTC" firstStartedPulling="2025-07-07 06:15:48.363931386 +0000 UTC m=+61.353170335" lastFinishedPulling="2025-07-07 06:15:51.52396924 +0000 UTC m=+64.513208189" observedRunningTime="2025-07-07 06:15:52.257048565 +0000 UTC m=+65.246287523" watchObservedRunningTime="2025-07-07 06:15:52.258148155 +0000 UTC m=+65.247387118"
Jul 7 06:15:52.300767 containerd[1726]: time="2025-07-07T06:15:52.300732584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc\" id:\"2f6afcdd3825ffb80f4ace6b0f9e38eb95cd874acda51f71e56cfa7a0589b9b9\" pid:5254 exit_status:1 exited_at:{seconds:1751868952 nanos:300408486}"
Jul 7 06:15:52.799817 systemd-networkd[1357]: cali046996c198e: Gained IPv6LL
Jul 7 06:15:53.302462 containerd[1726]: time="2025-07-07T06:15:53.302414162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc\" id:\"7b30711bb596a7a942e3be8cb86421625b1adfb7479285e1061b77b811435d0e\" pid:5280 exit_status:1 exited_at:{seconds:1751868953 nanos:302219303}"
Jul 7 06:15:53.535209 containerd[1726]: time="2025-07-07T06:15:53.535183660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:53.538795 containerd[1726]: time="2025-07-07T06:15:53.538759872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190"
Jul 7 06:15:53.542107 containerd[1726]: time="2025-07-07T06:15:53.542068885Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:53.548232 containerd[1726]: time="2025-07-07T06:15:53.548194866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:53.548751 containerd[1726]: time="2025-07-07T06:15:53.548523332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.024183552s"
Jul 7 06:15:53.548751 containerd[1726]: time="2025-07-07T06:15:53.548548447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\""
Jul 7 06:15:53.549334 containerd[1726]: time="2025-07-07T06:15:53.549317490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 7 06:15:53.550512 containerd[1726]: time="2025-07-07T06:15:53.550228371Z" level=info msg="CreateContainer within sandbox \"7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 7 06:15:53.583207 containerd[1726]: time="2025-07-07T06:15:53.582006535Z" level=info msg="Container 5f45173e01b625ebfc547cfcfccc8c8af71c240c893aae12ce7910068252090f: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:15:53.606832 containerd[1726]: time="2025-07-07T06:15:53.606810150Z" level=info msg="CreateContainer within sandbox \"7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5f45173e01b625ebfc547cfcfccc8c8af71c240c893aae12ce7910068252090f\""
Jul 7 06:15:53.607724 containerd[1726]: time="2025-07-07T06:15:53.607182258Z" level=info msg="StartContainer for \"5f45173e01b625ebfc547cfcfccc8c8af71c240c893aae12ce7910068252090f\""
Jul 7 06:15:53.608517 containerd[1726]: time="2025-07-07T06:15:53.608483084Z" level=info msg="connecting to shim 5f45173e01b625ebfc547cfcfccc8c8af71c240c893aae12ce7910068252090f" address="unix:///run/containerd/s/aae0116d3b0816a2976d261e8bae2e257367ee09d079c0c08c7b39a92dd612bd" protocol=ttrpc version=3
Jul 7 06:15:53.625879 systemd[1]: Started cri-containerd-5f45173e01b625ebfc547cfcfccc8c8af71c240c893aae12ce7910068252090f.scope - libcontainer container 5f45173e01b625ebfc547cfcfccc8c8af71c240c893aae12ce7910068252090f.
Jul 7 06:15:53.654517 containerd[1726]: time="2025-07-07T06:15:53.654497865Z" level=info msg="StartContainer for \"5f45173e01b625ebfc547cfcfccc8c8af71c240c893aae12ce7910068252090f\" returns successfully"
Jul 7 06:15:54.301759 containerd[1726]: time="2025-07-07T06:15:54.301736381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc\" id:\"741f810fb94256125c5bfc6595acbd6d68b5563a6b7080a51af7923a6e30a4dc\" pid:5342 exit_status:1 exited_at:{seconds:1751868954 nanos:301529928}"
Jul 7 06:15:54.477818 containerd[1726]: time="2025-07-07T06:15:54.477780778Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc\" id:\"b2a605675736fa8067d126ea31e0eb5dbddb13787146979683adfb5a0304f08e\" pid:5365 exited_at:{seconds:1751868954 nanos:477591172}"
Jul 7 06:15:57.413881 containerd[1726]: time="2025-07-07T06:15:57.413840481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:57.669791 containerd[1726]: time="2025-07-07T06:15:57.669598223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977"
Jul 7 06:15:57.759380 containerd[1726]: time="2025-07-07T06:15:57.759318202Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:57.765369 containerd[1726]: time="2025-07-07T06:15:57.765324329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:57.765800 containerd[1726]: time="2025-07-07T06:15:57.765655832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.216238351s"
Jul 7 06:15:57.765800 containerd[1726]: time="2025-07-07T06:15:57.765682835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 7 06:15:57.767360 containerd[1726]: time="2025-07-07T06:15:57.767203978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 7 06:15:57.768035 containerd[1726]: time="2025-07-07T06:15:57.767910455Z" level=info msg="CreateContainer within sandbox \"44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 06:15:57.794431 containerd[1726]: time="2025-07-07T06:15:57.794407090Z" level=info msg="Container c61110dcd3fda4821254bdff2defe2871c4794519a0f81a620346db34bb50048: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:15:57.817037 containerd[1726]: time="2025-07-07T06:15:57.817013966Z" level=info msg="CreateContainer within sandbox \"44cf7f5aeec33ef1c4692e030b2fb1fae8ff1be7dc65954b8592a0dd8fba4589\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c61110dcd3fda4821254bdff2defe2871c4794519a0f81a620346db34bb50048\""
Jul 7 06:15:57.817417 containerd[1726]: time="2025-07-07T06:15:57.817329949Z" level=info msg="StartContainer for \"c61110dcd3fda4821254bdff2defe2871c4794519a0f81a620346db34bb50048\""
Jul 7 06:15:57.818559 containerd[1726]: time="2025-07-07T06:15:57.818494779Z" level=info msg="connecting to shim c61110dcd3fda4821254bdff2defe2871c4794519a0f81a620346db34bb50048" address="unix:///run/containerd/s/e3b2cf319716a541524bacdba7fbfb66e150b11995e47db92e11e054c4b5490f" protocol=ttrpc version=3
Jul 7 06:15:57.836850 systemd[1]: Started cri-containerd-c61110dcd3fda4821254bdff2defe2871c4794519a0f81a620346db34bb50048.scope - libcontainer container c61110dcd3fda4821254bdff2defe2871c4794519a0f81a620346db34bb50048.
Jul 7 06:15:57.877656 containerd[1726]: time="2025-07-07T06:15:57.877604221Z" level=info msg="StartContainer for \"c61110dcd3fda4821254bdff2defe2871c4794519a0f81a620346db34bb50048\" returns successfully"
Jul 7 06:15:58.133001 containerd[1726]: time="2025-07-07T06:15:58.132465580Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:58.136674 containerd[1726]: time="2025-07-07T06:15:58.136650818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Jul 7 06:15:58.137823 containerd[1726]: time="2025-07-07T06:15:58.137786482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 370.546624ms"
Jul 7 06:15:58.137927 containerd[1726]: time="2025-07-07T06:15:58.137915195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 7 06:15:58.138917 containerd[1726]: time="2025-07-07T06:15:58.138898977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Jul 7 06:15:58.140622 containerd[1726]: time="2025-07-07T06:15:58.140601965Z" level=info msg="CreateContainer within sandbox \"2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 06:15:58.181715 containerd[1726]: time="2025-07-07T06:15:58.180218108Z" level=info msg="Container 335d5e5c0686d0bb3d359d48348284a12dc439b20425c288a798cd3e08e5bc18: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:15:58.206842 containerd[1726]: time="2025-07-07T06:15:58.206816915Z" level=info msg="CreateContainer within sandbox \"2cdb38c212c1e1f16d0f4a53e5b22d010b671c66113bd89d842b5a543bac9daf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"335d5e5c0686d0bb3d359d48348284a12dc439b20425c288a798cd3e08e5bc18\""
Jul 7 06:15:58.207134 containerd[1726]: time="2025-07-07T06:15:58.207110595Z" level=info msg="StartContainer for \"335d5e5c0686d0bb3d359d48348284a12dc439b20425c288a798cd3e08e5bc18\""
Jul 7 06:15:58.208342 containerd[1726]: time="2025-07-07T06:15:58.208317415Z" level=info msg="connecting to shim 335d5e5c0686d0bb3d359d48348284a12dc439b20425c288a798cd3e08e5bc18" address="unix:///run/containerd/s/75afc48147c100f23889ae85257cf2320e737005391b1cc08ee50e9634baf422" protocol=ttrpc version=3
Jul 7 06:15:58.224841 systemd[1]: Started cri-containerd-335d5e5c0686d0bb3d359d48348284a12dc439b20425c288a798cd3e08e5bc18.scope - libcontainer container 335d5e5c0686d0bb3d359d48348284a12dc439b20425c288a798cd3e08e5bc18.
Jul 7 06:15:58.268753 containerd[1726]: time="2025-07-07T06:15:58.268727421Z" level=info msg="StartContainer for \"335d5e5c0686d0bb3d359d48348284a12dc439b20425c288a798cd3e08e5bc18\" returns successfully"
Jul 7 06:15:59.277621 kubelet[3158]: I0707 06:15:59.277469 3158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-564c97b774-4l6xr" podStartSLOduration=46.741710836 podStartE2EDuration="55.277453628s" podCreationTimestamp="2025-07-07 06:15:04 +0000 UTC" firstStartedPulling="2025-07-07 06:15:49.602871351 +0000 UTC m=+62.592110297" lastFinishedPulling="2025-07-07 06:15:58.138614135 +0000 UTC m=+71.127853089" observedRunningTime="2025-07-07 06:15:59.277439291 +0000 UTC m=+72.266678245" watchObservedRunningTime="2025-07-07 06:15:59.277453628 +0000 UTC m=+72.266692584"
Jul 7 06:15:59.278110 kubelet[3158]: I0707 06:15:59.278019 3158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-564c97b774-7z2jl" podStartSLOduration=46.994748858 podStartE2EDuration="55.278000932s" podCreationTimestamp="2025-07-07 06:15:04 +0000 UTC" firstStartedPulling="2025-07-07 06:15:49.483097895 +0000 UTC m=+62.472336843" lastFinishedPulling="2025-07-07 06:15:57.76634996 +0000 UTC m=+70.755588917" observedRunningTime="2025-07-07 06:15:58.278158806 +0000 UTC m=+71.267397763" watchObservedRunningTime="2025-07-07 06:15:59.278000932 +0000 UTC m=+72.267239956"
Jul 7 06:16:02.948247 containerd[1726]: time="2025-07-07T06:16:02.948198220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc\" id:\"eb593179efe0e04f34024197d03847425a26df15086001adeffb96a293478801\" pid:5488 exited_at:{seconds:1751868962 nanos:947620947}"
Jul 7 06:16:03.702301 containerd[1726]: time="2025-07-07T06:16:03.702245629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:03.764145 containerd[1726]: time="2025-07-07T06:16:03.764096840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"
Jul 7 06:16:03.769224 containerd[1726]: time="2025-07-07T06:16:03.769197348Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:03.813717 containerd[1726]: time="2025-07-07T06:16:03.813651951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:03.814301 containerd[1726]: time="2025-07-07T06:16:03.814209083Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.674973234s"
Jul 7 06:16:03.814301 containerd[1726]: time="2025-07-07T06:16:03.814233391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Jul 7 06:16:03.815294 containerd[1726]: time="2025-07-07T06:16:03.815274667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 7 06:16:03.828154 containerd[1726]: time="2025-07-07T06:16:03.828128186Z" level=info msg="CreateContainer within sandbox \"d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 7 06:16:04.053796 containerd[1726]: time="2025-07-07T06:16:04.053028499Z" level=info msg="Container ce08eebf13b906179e2373429f6d83069de8418a4c61108a1d5b38e821d3043b: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:04.209032 containerd[1726]: time="2025-07-07T06:16:04.209012412Z" level=info msg="CreateContainer within sandbox \"d74ddf748368577315f7ac9d92cecd1480cc741d16f870e451d02d7a74c15924\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ce08eebf13b906179e2373429f6d83069de8418a4c61108a1d5b38e821d3043b\""
Jul 7 06:16:04.210170 containerd[1726]: time="2025-07-07T06:16:04.209408086Z" level=info msg="StartContainer for \"ce08eebf13b906179e2373429f6d83069de8418a4c61108a1d5b38e821d3043b\""
Jul 7 06:16:04.210641 containerd[1726]: time="2025-07-07T06:16:04.210580306Z" level=info msg="connecting to shim ce08eebf13b906179e2373429f6d83069de8418a4c61108a1d5b38e821d3043b" address="unix:///run/containerd/s/ca3484d94032fe7f1f9d1c32f91ad8bc8de6c34fa4d5c92ec3704fd5277eed7e" protocol=ttrpc version=3
Jul 7 06:16:04.229863 systemd[1]: Started cri-containerd-ce08eebf13b906179e2373429f6d83069de8418a4c61108a1d5b38e821d3043b.scope - libcontainer container ce08eebf13b906179e2373429f6d83069de8418a4c61108a1d5b38e821d3043b.
Jul 7 06:16:04.308058 containerd[1726]: time="2025-07-07T06:16:04.307973084Z" level=info msg="StartContainer for \"ce08eebf13b906179e2373429f6d83069de8418a4c61108a1d5b38e821d3043b\" returns successfully"
Jul 7 06:16:05.337636 kubelet[3158]: I0707 06:16:05.337586 3158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-cbcdccb6b-29z7l" podStartSLOduration=43.904323264 podStartE2EDuration="56.337571603s" podCreationTimestamp="2025-07-07 06:15:09 +0000 UTC" firstStartedPulling="2025-07-07 06:15:51.381825742 +0000 UTC m=+64.371064699" lastFinishedPulling="2025-07-07 06:16:03.81507408 +0000 UTC m=+76.804313038" observedRunningTime="2025-07-07 06:16:05.337382845 +0000 UTC m=+78.326621801" watchObservedRunningTime="2025-07-07 06:16:05.337571603 +0000 UTC m=+78.326810556"
Jul 7 06:16:05.420895 containerd[1726]: time="2025-07-07T06:16:05.420854191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce08eebf13b906179e2373429f6d83069de8418a4c61108a1d5b38e821d3043b\" id:\"5e184a300bd4762b4ed191dd6c5219d4a402d23799dbffd033c7e455be6f7ca4\" pid:5563 exited_at:{seconds:1751868965 nanos:420290765}"
Jul 7 06:16:07.105461 containerd[1726]: time="2025-07-07T06:16:07.105411393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:07.109500 containerd[1726]: time="2025-07-07T06:16:07.109442487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 7 06:16:07.154205 containerd[1726]: time="2025-07-07T06:16:07.154148733Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:07.208099 containerd[1726]: time="2025-07-07T06:16:07.208043518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:07.208646 containerd[1726]: time="2025-07-07T06:16:07.208559046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.393256447s"
Jul 7 06:16:07.208646 containerd[1726]: time="2025-07-07T06:16:07.208585336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 7 06:16:07.210583 containerd[1726]: time="2025-07-07T06:16:07.210460041Z" level=info msg="CreateContainer within sandbox \"7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 7 06:16:07.362895 containerd[1726]: time="2025-07-07T06:16:07.362822836Z" level=info msg="Container 4ddf7b448e94865079b5311e95cfab57bd835752371d0e1a6626c2d3568ed175: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:07.463328 containerd[1726]: time="2025-07-07T06:16:07.463305262Z" level=info msg="CreateContainer within sandbox \"7b9aee1952e4539880cb3902d865eafcb31adf6f67fbad4bb4c34636c0fa8283\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4ddf7b448e94865079b5311e95cfab57bd835752371d0e1a6626c2d3568ed175\""
Jul 7 06:16:07.463828 containerd[1726]: time="2025-07-07T06:16:07.463610521Z" level=info msg="StartContainer for \"4ddf7b448e94865079b5311e95cfab57bd835752371d0e1a6626c2d3568ed175\""
Jul 7 06:16:07.464923 containerd[1726]: time="2025-07-07T06:16:07.464880816Z" level=info msg="connecting to shim 4ddf7b448e94865079b5311e95cfab57bd835752371d0e1a6626c2d3568ed175" address="unix:///run/containerd/s/aae0116d3b0816a2976d261e8bae2e257367ee09d079c0c08c7b39a92dd612bd" protocol=ttrpc version=3
Jul 7 06:16:07.485827 systemd[1]: Started cri-containerd-4ddf7b448e94865079b5311e95cfab57bd835752371d0e1a6626c2d3568ed175.scope - libcontainer container 4ddf7b448e94865079b5311e95cfab57bd835752371d0e1a6626c2d3568ed175.
Jul 7 06:16:07.515969 containerd[1726]: time="2025-07-07T06:16:07.515953942Z" level=info msg="StartContainer for \"4ddf7b448e94865079b5311e95cfab57bd835752371d0e1a6626c2d3568ed175\" returns successfully"
Jul 7 06:16:08.170838 kubelet[3158]: I0707 06:16:08.170814 3158 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 7 06:16:08.171198 kubelet[3158]: I0707 06:16:08.170844 3158 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 7 06:16:08.333435 kubelet[3158]: I0707 06:16:08.333390 3158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xpqzl" podStartSLOduration=41.541202833 podStartE2EDuration="59.33336564s" podCreationTimestamp="2025-07-07 06:15:09 +0000 UTC" firstStartedPulling="2025-07-07 06:15:49.417134592 +0000 UTC m=+62.406373546" lastFinishedPulling="2025-07-07 06:16:07.209297399 +0000 UTC m=+80.198536353" observedRunningTime="2025-07-07 06:16:08.331920738 +0000 UTC m=+81.321159693" watchObservedRunningTime="2025-07-07 06:16:08.33336564 +0000 UTC m=+81.322604596"
Jul 7 06:16:09.687635 containerd[1726]: time="2025-07-07T06:16:09.687599384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80fab047f314b5264709e3ac04ce7d5622880ee08b2caa82501a141979a853df\" id:\"7d0d5a20c0874d446e7da178d3a7c842a05c387401161c82ab93ab89dde499a2\" pid:5623 exited_at:{seconds:1751868969 nanos:687387465}"
Jul 7 06:16:32.924038 containerd[1726]: time="2025-07-07T06:16:32.923832530Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce08eebf13b906179e2373429f6d83069de8418a4c61108a1d5b38e821d3043b\" id:\"a0223e05fdf1f87247db3b97475d81f8822238543f252cccef1b3ff7db76e8ca\" pid:5662 exited_at:{seconds:1751868992 nanos:923649611}"
Jul 7 06:16:32.952055 containerd[1726]: time="2025-07-07T06:16:32.952012204Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc\" id:\"9bf590d6337769e151f108e91eec09070ef399218174d9d922811e15942ca9ea\" pid:5679 exited_at:{seconds:1751868992 nanos:951860791}"
Jul 7 06:16:39.690545 containerd[1726]: time="2025-07-07T06:16:39.690395735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80fab047f314b5264709e3ac04ce7d5622880ee08b2caa82501a141979a853df\" id:\"2ff61e699dd5e72f36ecdd43549834937a7ab6cfd53c51473ffe758450d1d5b4\" pid:5704 exited_at:{seconds:1751868999 nanos:689752099}"
Jul 7 06:16:44.754715 systemd[1]: Started sshd@7-10.200.4.33:22-10.200.16.10:33010.service - OpenSSH per-connection server daemon (10.200.16.10:33010).
Jul 7 06:16:45.355794 sshd[5721]: Accepted publickey for core from 10.200.16.10 port 33010 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:16:45.359606 sshd-session[5721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:16:45.363607 systemd-logind[1703]: New session 10 of user core.
Jul 7 06:16:45.370363 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 7 06:16:45.848944 sshd[5723]: Connection closed by 10.200.16.10 port 33010
Jul 7 06:16:45.849509 sshd-session[5721]: pam_unix(sshd:session): session closed for user core
Jul 7 06:16:45.853838 systemd[1]: sshd@7-10.200.4.33:22-10.200.16.10:33010.service: Deactivated successfully.
Jul 7 06:16:45.856171 systemd[1]: session-10.scope: Deactivated successfully.
Jul 7 06:16:45.857270 systemd-logind[1703]: Session 10 logged out. Waiting for processes to exit.
Jul 7 06:16:45.858084 systemd-logind[1703]: Removed session 10.
Jul 7 06:16:48.845639 containerd[1726]: time="2025-07-07T06:16:48.845594852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce08eebf13b906179e2373429f6d83069de8418a4c61108a1d5b38e821d3043b\" id:\"9e9d00180294e9b220cec43d88e86cb4102b228bd77c50de4d63bf9c4ce167cf\" pid:5749 exited_at:{seconds:1751869008 nanos:845205919}"
Jul 7 06:16:50.955518 systemd[1]: Started sshd@8-10.200.4.33:22-10.200.16.10:55434.service - OpenSSH per-connection server daemon (10.200.16.10:55434).
Jul 7 06:16:51.551730 sshd[5760]: Accepted publickey for core from 10.200.16.10 port 55434 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:16:51.551896 sshd-session[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:16:51.556820 systemd-logind[1703]: New session 11 of user core.
Jul 7 06:16:51.563868 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 06:16:52.028158 sshd[5762]: Connection closed by 10.200.16.10 port 55434
Jul 7 06:16:52.028628 sshd-session[5760]: pam_unix(sshd:session): session closed for user core
Jul 7 06:16:52.031297 systemd[1]: sshd@8-10.200.4.33:22-10.200.16.10:55434.service: Deactivated successfully.
Jul 7 06:16:52.032971 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 06:16:52.033645 systemd-logind[1703]: Session 11 logged out. Waiting for processes to exit.
Jul 7 06:16:52.034831 systemd-logind[1703]: Removed session 11.
Jul 7 06:16:54.142943 update_engine[1704]: I20250707 06:16:54.142889 1704 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 7 06:16:54.142943 update_engine[1704]: I20250707 06:16:54.142939 1704 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 7 06:16:54.143524 update_engine[1704]: I20250707 06:16:54.143108 1704 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 7 06:16:54.143728 update_engine[1704]: I20250707 06:16:54.143604 1704 omaha_request_params.cc:62] Current group set to alpha
Jul 7 06:16:54.144131 update_engine[1704]: I20250707 06:16:54.143874 1704 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 7 06:16:54.144131 update_engine[1704]: I20250707 06:16:54.143889 1704 update_attempter.cc:643] Scheduling an action processor start.
Jul 7 06:16:54.144131 update_engine[1704]: I20250707 06:16:54.143910 1704 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 7 06:16:54.144131 update_engine[1704]: I20250707 06:16:54.143954 1704 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 7 06:16:54.144131 update_engine[1704]: I20250707 06:16:54.144022 1704 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 7 06:16:54.144131 update_engine[1704]: I20250707 06:16:54.144030 1704 omaha_request_action.cc:272] Request:
Jul 7 06:16:54.144131 update_engine[1704]:
Jul 7 06:16:54.144131 update_engine[1704]:
Jul 7 06:16:54.144131 update_engine[1704]:
Jul 7 06:16:54.144131 update_engine[1704]:
Jul 7 06:16:54.144131 update_engine[1704]:
Jul 7 06:16:54.144131 update_engine[1704]:
Jul 7 06:16:54.144131 update_engine[1704]:
Jul 7 06:16:54.144131 update_engine[1704]:
Jul 7 06:16:54.144131 update_engine[1704]: I20250707 06:16:54.144037 1704 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 06:16:54.144533 locksmithd[1790]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 7 06:16:54.145395 update_engine[1704]: I20250707 06:16:54.145370 1704 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 06:16:54.145749 update_engine[1704]: I20250707 06:16:54.145730 1704 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 06:16:54.170151 update_engine[1704]: E20250707 06:16:54.170120 1704 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 06:16:54.170233 update_engine[1704]: I20250707 06:16:54.170192 1704 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 7 06:16:54.484106 containerd[1726]: time="2025-07-07T06:16:54.484069970Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc\" id:\"d0fe4b8db7307d7b2d93affbcded380da93cdb918f05e956c6aeff6e90a79e1d\" pid:5788 exited_at:{seconds:1751869014 nanos:483837945}"
Jul 7 06:16:57.143971 systemd[1]: Started sshd@9-10.200.4.33:22-10.200.16.10:55436.service - OpenSSH per-connection server daemon (10.200.16.10:55436).
Jul 7 06:16:57.735871 sshd[5800]: Accepted publickey for core from 10.200.16.10 port 55436 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:16:57.736906 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:16:57.740188 systemd-logind[1703]: New session 12 of user core.
Jul 7 06:16:57.745863 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 06:16:58.203539 sshd[5802]: Connection closed by 10.200.16.10 port 55436
Jul 7 06:16:58.204161 sshd-session[5800]: pam_unix(sshd:session): session closed for user core
Jul 7 06:16:58.206932 systemd[1]: sshd@9-10.200.4.33:22-10.200.16.10:55436.service: Deactivated successfully.
Jul 7 06:16:58.208594 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 06:16:58.209369 systemd-logind[1703]: Session 12 logged out. Waiting for processes to exit.
Jul 7 06:16:58.210446 systemd-logind[1703]: Removed session 12.
Jul 7 06:16:58.310214 systemd[1]: Started sshd@10-10.200.4.33:22-10.200.16.10:55444.service - OpenSSH per-connection server daemon (10.200.16.10:55444).
Jul 7 06:16:58.903742 sshd[5815]: Accepted publickey for core from 10.200.16.10 port 55444 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:16:58.904669 sshd-session[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:16:58.908526 systemd-logind[1703]: New session 13 of user core.
Jul 7 06:16:58.913843 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 06:16:59.386560 sshd[5817]: Connection closed by 10.200.16.10 port 55444
Jul 7 06:16:59.386951 sshd-session[5815]: pam_unix(sshd:session): session closed for user core
Jul 7 06:16:59.389393 systemd[1]: sshd@10-10.200.4.33:22-10.200.16.10:55444.service: Deactivated successfully.
Jul 7 06:16:59.390952 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 06:16:59.391632 systemd-logind[1703]: Session 13 logged out. Waiting for processes to exit.
Jul 7 06:16:59.392680 systemd-logind[1703]: Removed session 13.
Jul 7 06:16:59.493055 systemd[1]: Started sshd@11-10.200.4.33:22-10.200.16.10:55454.service - OpenSSH per-connection server daemon (10.200.16.10:55454).
Jul 7 06:17:00.088608 sshd[5826]: Accepted publickey for core from 10.200.16.10 port 55454 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:17:00.089519 sshd-session[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:00.093179 systemd-logind[1703]: New session 14 of user core.
Jul 7 06:17:00.099847 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 06:17:00.562391 sshd[5828]: Connection closed by 10.200.16.10 port 55454
Jul 7 06:17:00.562779 sshd-session[5826]: pam_unix(sshd:session): session closed for user core
Jul 7 06:17:00.565202 systemd[1]: sshd@11-10.200.4.33:22-10.200.16.10:55454.service: Deactivated successfully.
Jul 7 06:17:00.566819 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 06:17:00.567525 systemd-logind[1703]: Session 14 logged out. Waiting for processes to exit.
Jul 7 06:17:00.568606 systemd-logind[1703]: Removed session 14.
Jul 7 06:17:02.920883 containerd[1726]: time="2025-07-07T06:17:02.920835018Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce08eebf13b906179e2373429f6d83069de8418a4c61108a1d5b38e821d3043b\" id:\"8ba580c36fe6d2c084eda7a670c7d0f334577f23b7e08a66f9e04699b4d9fee5\" pid:5855 exited_at:{seconds:1751869022 nanos:920304073}"
Jul 7 06:17:02.948537 containerd[1726]: time="2025-07-07T06:17:02.948485958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc\" id:\"32c74f0470fc559c9896c258d8765d71b6928fc7aee324261d476c11e240d885\" pid:5873 exited_at:{seconds:1751869022 nanos:948237368}"
Jul 7 06:17:04.143373 update_engine[1704]: I20250707 06:17:04.143316 1704 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 06:17:04.143817 update_engine[1704]: I20250707 06:17:04.143564 1704 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 06:17:04.143914 update_engine[1704]: I20250707 06:17:04.143877 1704 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 06:17:04.155647 update_engine[1704]: E20250707 06:17:04.155620 1704 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 06:17:04.155729 update_engine[1704]: I20250707 06:17:04.155670 1704 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 7 06:17:05.672383 systemd[1]: Started sshd@12-10.200.4.33:22-10.200.16.10:35922.service - OpenSSH per-connection server daemon (10.200.16.10:35922). Jul 7 06:17:06.259080 sshd[5892]: Accepted publickey for core from 10.200.16.10 port 35922 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:06.260123 sshd-session[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:06.263804 systemd-logind[1703]: New session 15 of user core. Jul 7 06:17:06.269841 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 06:17:06.726996 sshd[5894]: Connection closed by 10.200.16.10 port 35922 Jul 7 06:17:06.727409 sshd-session[5892]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:06.730265 systemd[1]: sshd@12-10.200.4.33:22-10.200.16.10:35922.service: Deactivated successfully. Jul 7 06:17:06.731853 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 06:17:06.732496 systemd-logind[1703]: Session 15 logged out. Waiting for processes to exit. Jul 7 06:17:06.733506 systemd-logind[1703]: Removed session 15. Jul 7 06:17:09.684406 containerd[1726]: time="2025-07-07T06:17:09.684357501Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80fab047f314b5264709e3ac04ce7d5622880ee08b2caa82501a141979a853df\" id:\"3ce2a9d1a432e27dc978340bd2f0cb4cb2f8100a75f1e77be4a046039b97d296\" pid:5918 exited_at:{seconds:1751869029 nanos:684040876}" Jul 7 06:17:11.832773 systemd[1]: Started sshd@13-10.200.4.33:22-10.200.16.10:48558.service - OpenSSH per-connection server daemon (10.200.16.10:48558). 
Jul 7 06:17:12.427048 sshd[5934]: Accepted publickey for core from 10.200.16.10 port 48558 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:12.428031 sshd-session[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:12.431996 systemd-logind[1703]: New session 16 of user core. Jul 7 06:17:12.434889 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 06:17:12.889046 sshd[5936]: Connection closed by 10.200.16.10 port 48558 Jul 7 06:17:12.889504 sshd-session[5934]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:12.891559 systemd[1]: sshd@13-10.200.4.33:22-10.200.16.10:48558.service: Deactivated successfully. Jul 7 06:17:12.893115 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 06:17:12.894684 systemd-logind[1703]: Session 16 logged out. Waiting for processes to exit. Jul 7 06:17:12.895505 systemd-logind[1703]: Removed session 16. Jul 7 06:17:14.142033 update_engine[1704]: I20250707 06:17:14.141978 1704 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 06:17:14.142347 update_engine[1704]: I20250707 06:17:14.142199 1704 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 06:17:14.142458 update_engine[1704]: I20250707 06:17:14.142437 1704 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 06:17:14.232414 update_engine[1704]: E20250707 06:17:14.232380 1704 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 06:17:14.232517 update_engine[1704]: I20250707 06:17:14.232439 1704 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 7 06:17:17.995480 systemd[1]: Started sshd@14-10.200.4.33:22-10.200.16.10:48562.service - OpenSSH per-connection server daemon (10.200.16.10:48562). 
Jul 7 06:17:18.590420 sshd[5955]: Accepted publickey for core from 10.200.16.10 port 48562 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:18.591375 sshd-session[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:18.595223 systemd-logind[1703]: New session 17 of user core. Jul 7 06:17:18.599830 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 06:17:19.080768 sshd[5957]: Connection closed by 10.200.16.10 port 48562 Jul 7 06:17:19.081198 sshd-session[5955]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:19.085338 systemd[1]: sshd@14-10.200.4.33:22-10.200.16.10:48562.service: Deactivated successfully. Jul 7 06:17:19.087964 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 06:17:19.090282 systemd-logind[1703]: Session 17 logged out. Waiting for processes to exit. Jul 7 06:17:19.093081 systemd-logind[1703]: Removed session 17. Jul 7 06:17:19.185439 systemd[1]: Started sshd@15-10.200.4.33:22-10.200.16.10:48568.service - OpenSSH per-connection server daemon (10.200.16.10:48568). Jul 7 06:17:19.783399 sshd[5969]: Accepted publickey for core from 10.200.16.10 port 48568 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:19.784350 sshd-session[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:19.787748 systemd-logind[1703]: New session 18 of user core. Jul 7 06:17:19.792859 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 06:17:20.300748 sshd[5971]: Connection closed by 10.200.16.10 port 48568 Jul 7 06:17:20.301146 sshd-session[5969]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:20.303269 systemd[1]: sshd@15-10.200.4.33:22-10.200.16.10:48568.service: Deactivated successfully. Jul 7 06:17:20.304844 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 06:17:20.306065 systemd-logind[1703]: Session 18 logged out. Waiting for processes to exit. 
Jul 7 06:17:20.307175 systemd-logind[1703]: Removed session 18. Jul 7 06:17:20.411902 systemd[1]: Started sshd@16-10.200.4.33:22-10.200.16.10:43144.service - OpenSSH per-connection server daemon (10.200.16.10:43144). Jul 7 06:17:21.011480 sshd[5981]: Accepted publickey for core from 10.200.16.10 port 43144 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:21.012545 sshd-session[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:21.017166 systemd-logind[1703]: New session 19 of user core. Jul 7 06:17:21.020855 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 06:17:23.547418 sshd[5983]: Connection closed by 10.200.16.10 port 43144 Jul 7 06:17:23.547908 sshd-session[5981]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:23.551079 systemd-logind[1703]: Session 19 logged out. Waiting for processes to exit. Jul 7 06:17:23.553234 systemd[1]: sshd@16-10.200.4.33:22-10.200.16.10:43144.service: Deactivated successfully. Jul 7 06:17:23.555198 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 06:17:23.556235 systemd[1]: session-19.scope: Consumed 448ms CPU time, 78.7M memory peak. Jul 7 06:17:23.558390 systemd-logind[1703]: Removed session 19. Jul 7 06:17:23.657284 systemd[1]: Started sshd@17-10.200.4.33:22-10.200.16.10:43158.service - OpenSSH per-connection server daemon (10.200.16.10:43158). Jul 7 06:17:24.140794 update_engine[1704]: I20250707 06:17:24.140739 1704 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 06:17:24.141167 update_engine[1704]: I20250707 06:17:24.140954 1704 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 06:17:24.141217 update_engine[1704]: I20250707 06:17:24.141191 1704 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 7 06:17:24.172047 update_engine[1704]: E20250707 06:17:24.172013 1704 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 06:17:24.172141 update_engine[1704]: I20250707 06:17:24.172071 1704 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 7 06:17:24.172141 update_engine[1704]: I20250707 06:17:24.172079 1704 omaha_request_action.cc:617] Omaha request response: Jul 7 06:17:24.172186 update_engine[1704]: E20250707 06:17:24.172148 1704 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 7 06:17:24.172186 update_engine[1704]: I20250707 06:17:24.172164 1704 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 7 06:17:24.172186 update_engine[1704]: I20250707 06:17:24.172169 1704 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 06:17:24.172186 update_engine[1704]: I20250707 06:17:24.172174 1704 update_attempter.cc:306] Processing Done. Jul 7 06:17:24.172269 update_engine[1704]: E20250707 06:17:24.172190 1704 update_attempter.cc:619] Update failed. Jul 7 06:17:24.172269 update_engine[1704]: I20250707 06:17:24.172194 1704 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 7 06:17:24.172269 update_engine[1704]: I20250707 06:17:24.172200 1704 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 7 06:17:24.172269 update_engine[1704]: I20250707 06:17:24.172205 1704 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 7 06:17:24.172346 update_engine[1704]: I20250707 06:17:24.172276 1704 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 06:17:24.172346 update_engine[1704]: I20250707 06:17:24.172297 1704 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 06:17:24.172346 update_engine[1704]: I20250707 06:17:24.172302 1704 omaha_request_action.cc:272] Request: Jul 7 06:17:24.172346 update_engine[1704]: Jul 7 06:17:24.172346 update_engine[1704]: Jul 7 06:17:24.172346 update_engine[1704]: Jul 7 06:17:24.172346 update_engine[1704]: Jul 7 06:17:24.172346 update_engine[1704]: Jul 7 06:17:24.172346 update_engine[1704]: Jul 7 06:17:24.172346 update_engine[1704]: I20250707 06:17:24.172308 1704 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 06:17:24.172535 update_engine[1704]: I20250707 06:17:24.172428 1704 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 06:17:24.172658 update_engine[1704]: I20250707 06:17:24.172632 1704 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 7 06:17:24.172886 locksmithd[1790]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 7 06:17:24.181672 update_engine[1704]: E20250707 06:17:24.181493 1704 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 06:17:24.181672 update_engine[1704]: I20250707 06:17:24.181550 1704 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 7 06:17:24.181672 update_engine[1704]: I20250707 06:17:24.181557 1704 omaha_request_action.cc:617] Omaha request response: Jul 7 06:17:24.181672 update_engine[1704]: I20250707 06:17:24.181564 1704 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 06:17:24.181672 update_engine[1704]: I20250707 06:17:24.181568 1704 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 06:17:24.181672 update_engine[1704]: I20250707 06:17:24.181573 1704 update_attempter.cc:306] Processing Done. Jul 7 06:17:24.181672 update_engine[1704]: I20250707 06:17:24.181581 1704 update_attempter.cc:310] Error event sent. Jul 7 06:17:24.181672 update_engine[1704]: I20250707 06:17:24.181589 1704 update_check_scheduler.cc:74] Next update check in 42m50s Jul 7 06:17:24.182037 locksmithd[1790]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 7 06:17:24.256908 sshd[6016]: Accepted publickey for core from 10.200.16.10 port 43158 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:24.257866 sshd-session[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:24.261880 systemd-logind[1703]: New session 20 of user core. Jul 7 06:17:24.267854 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 7 06:17:24.817780 sshd[6018]: Connection closed by 10.200.16.10 port 43158 Jul 7 06:17:24.819097 sshd-session[6016]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:24.821380 systemd[1]: sshd@17-10.200.4.33:22-10.200.16.10:43158.service: Deactivated successfully. Jul 7 06:17:24.825413 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 06:17:24.827817 systemd-logind[1703]: Session 20 logged out. Waiting for processes to exit. Jul 7 06:17:24.830912 systemd-logind[1703]: Removed session 20. Jul 7 06:17:24.929032 systemd[1]: Started sshd@18-10.200.4.33:22-10.200.16.10:43174.service - OpenSSH per-connection server daemon (10.200.16.10:43174). Jul 7 06:17:25.526800 sshd[6027]: Accepted publickey for core from 10.200.16.10 port 43174 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:25.527954 sshd-session[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:25.532880 systemd-logind[1703]: New session 21 of user core. Jul 7 06:17:25.537903 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 06:17:25.986779 sshd[6029]: Connection closed by 10.200.16.10 port 43174 Jul 7 06:17:25.987174 sshd-session[6027]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:25.989238 systemd[1]: sshd@18-10.200.4.33:22-10.200.16.10:43174.service: Deactivated successfully. Jul 7 06:17:25.990938 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 06:17:25.992132 systemd-logind[1703]: Session 21 logged out. Waiting for processes to exit. Jul 7 06:17:25.993137 systemd-logind[1703]: Removed session 21. Jul 7 06:17:31.093742 systemd[1]: Started sshd@19-10.200.4.33:22-10.200.16.10:33032.service - OpenSSH per-connection server daemon (10.200.16.10:33032). 
Jul 7 06:17:31.694401 sshd[6041]: Accepted publickey for core from 10.200.16.10 port 33032 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:31.695360 sshd-session[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:31.699167 systemd-logind[1703]: New session 22 of user core. Jul 7 06:17:31.705834 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 06:17:32.157320 sshd[6043]: Connection closed by 10.200.16.10 port 33032 Jul 7 06:17:32.157723 sshd-session[6041]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:32.160730 systemd[1]: sshd@19-10.200.4.33:22-10.200.16.10:33032.service: Deactivated successfully. Jul 7 06:17:32.162174 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 06:17:32.162903 systemd-logind[1703]: Session 22 logged out. Waiting for processes to exit. Jul 7 06:17:32.163816 systemd-logind[1703]: Removed session 22. Jul 7 06:17:32.922097 containerd[1726]: time="2025-07-07T06:17:32.921682369Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce08eebf13b906179e2373429f6d83069de8418a4c61108a1d5b38e821d3043b\" id:\"b6123d3c211d60568bc29b7ba6aabfda693e68ad5f54b40a5b06c90a57d840dc\" pid:6069 exited_at:{seconds:1751869052 nanos:920405149}" Jul 7 06:17:32.947307 containerd[1726]: time="2025-07-07T06:17:32.947276629Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc\" id:\"14885699a032ac28fab5b54377afacd75ef45dd2eb440f2b8f921ad13c254658\" pid:6087 exited_at:{seconds:1751869052 nanos:947099541}" Jul 7 06:17:37.266547 systemd[1]: Started sshd@20-10.200.4.33:22-10.200.16.10:33034.service - OpenSSH per-connection server daemon (10.200.16.10:33034). 
Jul 7 06:17:37.861128 sshd[6102]: Accepted publickey for core from 10.200.16.10 port 33034 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:37.862101 sshd-session[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:37.865938 systemd-logind[1703]: New session 23 of user core. Jul 7 06:17:37.870857 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 06:17:38.327373 sshd[6104]: Connection closed by 10.200.16.10 port 33034 Jul 7 06:17:38.327825 sshd-session[6102]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:38.330289 systemd[1]: sshd@20-10.200.4.33:22-10.200.16.10:33034.service: Deactivated successfully. Jul 7 06:17:38.331868 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 06:17:38.332565 systemd-logind[1703]: Session 23 logged out. Waiting for processes to exit. Jul 7 06:17:38.333520 systemd-logind[1703]: Removed session 23. Jul 7 06:17:39.689285 containerd[1726]: time="2025-07-07T06:17:39.689246555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80fab047f314b5264709e3ac04ce7d5622880ee08b2caa82501a141979a853df\" id:\"dd82ca888556840779e39ee999c072c28400bc337b47220102b5355c6c532030\" pid:6127 exited_at:{seconds:1751869059 nanos:689052309}" Jul 7 06:17:43.442622 systemd[1]: Started sshd@21-10.200.4.33:22-10.200.16.10:43420.service - OpenSSH per-connection server daemon (10.200.16.10:43420). Jul 7 06:17:44.041097 sshd[6141]: Accepted publickey for core from 10.200.16.10 port 43420 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:44.042209 sshd-session[6141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:44.046127 systemd-logind[1703]: New session 24 of user core. Jul 7 06:17:44.047834 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 7 06:17:44.508423 sshd[6143]: Connection closed by 10.200.16.10 port 43420 Jul 7 06:17:44.508858 sshd-session[6141]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:44.511376 systemd[1]: sshd@21-10.200.4.33:22-10.200.16.10:43420.service: Deactivated successfully. Jul 7 06:17:44.512916 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 06:17:44.513581 systemd-logind[1703]: Session 24 logged out. Waiting for processes to exit. Jul 7 06:17:44.514574 systemd-logind[1703]: Removed session 24. Jul 7 06:17:48.831448 containerd[1726]: time="2025-07-07T06:17:48.831397372Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce08eebf13b906179e2373429f6d83069de8418a4c61108a1d5b38e821d3043b\" id:\"76d05e73442e8dbead501b0c58a91a7777e928421b07251f41f4c4706e4d0cae\" pid:6169 exited_at:{seconds:1751869068 nanos:831167608}" Jul 7 06:17:49.616965 systemd[1]: Started sshd@22-10.200.4.33:22-10.200.16.10:43422.service - OpenSSH per-connection server daemon (10.200.16.10:43422). Jul 7 06:17:50.225937 sshd[6179]: Accepted publickey for core from 10.200.16.10 port 43422 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:50.227954 sshd-session[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:50.233757 systemd-logind[1703]: New session 25 of user core. Jul 7 06:17:50.239885 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 06:17:50.707400 sshd[6181]: Connection closed by 10.200.16.10 port 43422 Jul 7 06:17:50.707933 sshd-session[6179]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:50.710112 systemd[1]: sshd@22-10.200.4.33:22-10.200.16.10:43422.service: Deactivated successfully. Jul 7 06:17:50.711804 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 06:17:50.713341 systemd-logind[1703]: Session 25 logged out. Waiting for processes to exit. Jul 7 06:17:50.714445 systemd-logind[1703]: Removed session 25. 
Jul 7 06:17:54.483517 containerd[1726]: time="2025-07-07T06:17:54.483419057Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc\" id:\"b365c77b60d770091f9ca7f9a320ee62465f4fd756ea35b43cd0483b3bccee0b\" pid:6206 exited_at:{seconds:1751869074 nanos:483196592}" Jul 7 06:17:55.817574 systemd[1]: Started sshd@23-10.200.4.33:22-10.200.16.10:41500.service - OpenSSH per-connection server daemon (10.200.16.10:41500). Jul 7 06:17:56.410885 sshd[6218]: Accepted publickey for core from 10.200.16.10 port 41500 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:56.412138 sshd-session[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:56.416294 systemd-logind[1703]: New session 26 of user core. Jul 7 06:17:56.419848 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 06:17:56.875630 sshd[6220]: Connection closed by 10.200.16.10 port 41500 Jul 7 06:17:56.876079 sshd-session[6218]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:56.878158 systemd[1]: sshd@23-10.200.4.33:22-10.200.16.10:41500.service: Deactivated successfully. Jul 7 06:17:56.879846 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 06:17:56.881396 systemd-logind[1703]: Session 26 logged out. Waiting for processes to exit. Jul 7 06:17:56.882256 systemd-logind[1703]: Removed session 26. Jul 7 06:18:01.982639 systemd[1]: Started sshd@24-10.200.4.33:22-10.200.16.10:54120.service - OpenSSH per-connection server daemon (10.200.16.10:54120). Jul 7 06:18:02.582785 sshd[6236]: Accepted publickey for core from 10.200.16.10 port 54120 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:18:02.583768 sshd-session[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:18:02.587640 systemd-logind[1703]: New session 27 of user core. 
Jul 7 06:18:02.591823 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 7 06:18:02.925399 containerd[1726]: time="2025-07-07T06:18:02.925202795Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce08eebf13b906179e2373429f6d83069de8418a4c61108a1d5b38e821d3043b\" id:\"980dc47eb184e33f03ed410f91d4ec0fef4904c7724fba0675c394a9c7eb5bf5\" pid:6253 exited_at:{seconds:1751869082 nanos:924977184}" Jul 7 06:18:02.978675 containerd[1726]: time="2025-07-07T06:18:02.978607096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc\" id:\"ef111759411718828f925c4240e6083603c61fe87538b8932dbbcfd0c3aba19b\" pid:6270 exited_at:{seconds:1751869082 nanos:976983310}" Jul 7 06:18:03.078691 sshd[6239]: Connection closed by 10.200.16.10 port 54120 Jul 7 06:18:03.079077 sshd-session[6236]: pam_unix(sshd:session): session closed for user core Jul 7 06:18:03.082719 systemd[1]: sshd@24-10.200.4.33:22-10.200.16.10:54120.service: Deactivated successfully. Jul 7 06:18:03.084211 systemd[1]: session-27.scope: Deactivated successfully. Jul 7 06:18:03.084943 systemd-logind[1703]: Session 27 logged out. Waiting for processes to exit. Jul 7 06:18:03.085901 systemd-logind[1703]: Removed session 27. Jul 7 06:18:08.184479 systemd[1]: Started sshd@25-10.200.4.33:22-10.200.16.10:54128.service - OpenSSH per-connection server daemon (10.200.16.10:54128). Jul 7 06:18:08.778480 sshd[6294]: Accepted publickey for core from 10.200.16.10 port 54128 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:18:08.779434 sshd-session[6294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:18:08.783329 systemd-logind[1703]: New session 28 of user core. Jul 7 06:18:08.789850 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jul 7 06:18:09.240288 sshd[6297]: Connection closed by 10.200.16.10 port 54128 Jul 7 06:18:09.240666 sshd-session[6294]: pam_unix(sshd:session): session closed for user core Jul 7 06:18:09.243125 systemd[1]: sshd@25-10.200.4.33:22-10.200.16.10:54128.service: Deactivated successfully. Jul 7 06:18:09.244723 systemd[1]: session-28.scope: Deactivated successfully. Jul 7 06:18:09.245351 systemd-logind[1703]: Session 28 logged out. Waiting for processes to exit. Jul 7 06:18:09.246389 systemd-logind[1703]: Removed session 28. Jul 7 06:18:09.682911 containerd[1726]: time="2025-07-07T06:18:09.682832173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80fab047f314b5264709e3ac04ce7d5622880ee08b2caa82501a141979a853df\" id:\"e06d7081de8e74dcfed60c5e0a54e57703cd3001464e07db07b87decbd9d45a9\" pid:6320 exited_at:{seconds:1751869089 nanos:682561091}" Jul 7 06:18:14.353629 systemd[1]: Started sshd@26-10.200.4.33:22-10.200.16.10:41552.service - OpenSSH per-connection server daemon (10.200.16.10:41552). Jul 7 06:18:14.951489 sshd[6333]: Accepted publickey for core from 10.200.16.10 port 41552 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:18:14.952497 sshd-session[6333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:18:14.956216 systemd-logind[1703]: New session 29 of user core. Jul 7 06:18:14.957864 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 7 06:18:15.433165 sshd[6335]: Connection closed by 10.200.16.10 port 41552 Jul 7 06:18:15.433619 sshd-session[6333]: pam_unix(sshd:session): session closed for user core Jul 7 06:18:15.435747 systemd[1]: sshd@26-10.200.4.33:22-10.200.16.10:41552.service: Deactivated successfully. Jul 7 06:18:15.438061 systemd[1]: session-29.scope: Deactivated successfully. Jul 7 06:18:15.438662 systemd-logind[1703]: Session 29 logged out. Waiting for processes to exit. Jul 7 06:18:15.439389 systemd-logind[1703]: Removed session 29. 
Jul 7 06:18:20.540655 systemd[1]: Started sshd@27-10.200.4.33:22-10.200.16.10:36842.service - OpenSSH per-connection server daemon (10.200.16.10:36842). Jul 7 06:18:21.132819 sshd[6347]: Accepted publickey for core from 10.200.16.10 port 36842 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:18:21.133794 sshd-session[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:18:21.137675 systemd-logind[1703]: New session 30 of user core. Jul 7 06:18:21.142867 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 7 06:18:21.598606 sshd[6349]: Connection closed by 10.200.16.10 port 36842 Jul 7 06:18:21.599025 sshd-session[6347]: pam_unix(sshd:session): session closed for user core Jul 7 06:18:21.601773 systemd[1]: sshd@27-10.200.4.33:22-10.200.16.10:36842.service: Deactivated successfully. Jul 7 06:18:21.603320 systemd[1]: session-30.scope: Deactivated successfully. Jul 7 06:18:21.604111 systemd-logind[1703]: Session 30 logged out. Waiting for processes to exit. Jul 7 06:18:21.605201 systemd-logind[1703]: Removed session 30. Jul 7 06:18:26.711631 systemd[1]: Started sshd@28-10.200.4.33:22-10.200.16.10:36852.service - OpenSSH per-connection server daemon (10.200.16.10:36852). Jul 7 06:18:27.308550 sshd[6370]: Accepted publickey for core from 10.200.16.10 port 36852 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:18:27.309692 sshd-session[6370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:18:27.313553 systemd-logind[1703]: New session 31 of user core. Jul 7 06:18:27.317843 systemd[1]: Started session-31.scope - Session 31 of User core. Jul 7 06:18:27.792974 sshd[6372]: Connection closed by 10.200.16.10 port 36852 Jul 7 06:18:27.793372 sshd-session[6370]: pam_unix(sshd:session): session closed for user core Jul 7 06:18:27.795990 systemd[1]: sshd@28-10.200.4.33:22-10.200.16.10:36852.service: Deactivated successfully. 
Jul 7 06:18:27.797576 systemd[1]: session-31.scope: Deactivated successfully. Jul 7 06:18:27.798247 systemd-logind[1703]: Session 31 logged out. Waiting for processes to exit. Jul 7 06:18:27.799392 systemd-logind[1703]: Removed session 31. Jul 7 06:18:32.918542 containerd[1726]: time="2025-07-07T06:18:32.918503456Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce08eebf13b906179e2373429f6d83069de8418a4c61108a1d5b38e821d3043b\" id:\"a4b918ba0beb5e84eb00661d4824282eabe4b9b157f175cda3de9909e9df591c\" pid:6399 exited_at:{seconds:1751869112 nanos:918152660}" Jul 7 06:18:32.948865 containerd[1726]: time="2025-07-07T06:18:32.948832146Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bf4e349450bd58d2624dc8a82a36a7b2c7d04a4eadba34027590800f335afc\" id:\"60a24777915710453cb0258b3d08b3e46325b13b24db5ebfb48c3fad42075ce1\" pid:6415 exited_at:{seconds:1751869112 nanos:948641772}"