Jul 1 08:41:21.958095 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jun 30 19:26:54 -00 2025
Jul 1 08:41:21.958120 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=03b744fdab9d0c2a6ce16909d1444c286b74402b7ab027472687ca33469d417f
Jul 1 08:41:21.958131 kernel: BIOS-provided physical RAM map:
Jul 1 08:41:21.958138 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 1 08:41:21.958144 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 1 08:41:21.958150 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jul 1 08:41:21.958157 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jul 1 08:41:21.958165 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jul 1 08:41:21.958170 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jul 1 08:41:21.958176 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 1 08:41:21.958182 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 1 08:41:21.958188 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 1 08:41:21.958194 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 1 08:41:21.958200 kernel: printk: legacy bootconsole [earlyser0] enabled
Jul 1 08:41:21.958209 kernel: NX (Execute Disable) protection: active
Jul 1 08:41:21.958216 kernel: APIC: Static calls initialized
Jul 1 08:41:21.958222 kernel: efi: EFI v2.7 by Microsoft
Jul 1 08:41:21.958228 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5518 RNG=0x3ffd2018
Jul 1 08:41:21.958235 kernel: random: crng init done
Jul 1 08:41:21.958241 kernel: secureboot: Secure boot disabled
Jul 1 08:41:21.958248 kernel: SMBIOS 3.1.0 present.
Jul 1 08:41:21.958254 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Jul 1 08:41:21.958261 kernel: DMI: Memory slots populated: 2/2
Jul 1 08:41:21.958268 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 1 08:41:21.958274 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jul 1 08:41:21.958281 kernel: Hyper-V: Nested features: 0x3e0101
Jul 1 08:41:21.958287 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 1 08:41:21.958293 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 1 08:41:21.958300 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 1 08:41:21.958306 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 1 08:41:21.958313 kernel: tsc: Detected 2300.000 MHz processor
Jul 1 08:41:21.958319 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 1 08:41:21.958327 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 1 08:41:21.958335 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jul 1 08:41:21.958342 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 1 08:41:21.958349 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 1 08:41:21.958356 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jul 1 08:41:21.958362 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jul 1 08:41:21.958369 kernel: Using GB pages for direct mapping
Jul 1 08:41:21.958376 kernel: ACPI: Early table checksum verification disabled
Jul 1 08:41:21.958385 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 1 08:41:21.958393 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 1 08:41:21.958400 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 1 08:41:21.958407 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 1 08:41:21.958414 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 1 08:41:21.958421 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 1 08:41:21.958428 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 1 08:41:21.958437 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 1 08:41:21.958444 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jul 1 08:41:21.958451 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jul 1 08:41:21.958458 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 1 08:41:21.958465 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 1 08:41:21.958472 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Jul 1 08:41:21.958479 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 1 08:41:21.958486 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 1 08:41:21.958493 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 1 08:41:21.958501 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 1 08:41:21.958508 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Jul 1 08:41:21.958515 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jul 1 08:41:21.958522 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 1 08:41:21.958529 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 1 08:41:21.958536 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jul 1 08:41:21.958543 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jul 1 08:41:21.958550 kernel: NODE_DATA(0) allocated [mem 0x2bfff6dc0-0x2bfffdfff]
Jul 1 08:41:21.958557 kernel: Zone ranges:
Jul 1 08:41:21.958565 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 1 08:41:21.958572 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 1 08:41:21.958579 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jul 1 08:41:21.958586 kernel: Device empty
Jul 1 08:41:21.958593 kernel: Movable zone start for each node
Jul 1 08:41:21.958600 kernel: Early memory node ranges
Jul 1 08:41:21.958607 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 1 08:41:21.958614 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Jul 1 08:41:21.958621 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jul 1 08:41:21.958629 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 1 08:41:21.958636 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 1 08:41:21.958644 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 1 08:41:21.958650 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 1 08:41:21.958657 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 1 08:41:21.958664 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jul 1 08:41:21.958670 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jul 1 08:41:21.958677 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 1 08:41:21.958684 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 1 08:41:21.958692 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 1 08:41:21.958700 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 1 08:41:21.958707 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 1 08:41:21.958714 kernel: TSC deadline timer available
Jul 1 08:41:21.958721 kernel: CPU topo: Max. logical packages: 1
Jul 1 08:41:21.958727 kernel: CPU topo: Max. logical dies: 1
Jul 1 08:41:21.958734 kernel: CPU topo: Max. dies per package: 1
Jul 1 08:41:21.958741 kernel: CPU topo: Max. threads per core: 2
Jul 1 08:41:21.958748 kernel: CPU topo: Num. cores per package: 1
Jul 1 08:41:21.958757 kernel: CPU topo: Num. threads per package: 2
Jul 1 08:41:21.958762 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 1 08:41:21.958768 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 1 08:41:21.958774 kernel: Booting paravirtualized kernel on Hyper-V
Jul 1 08:41:21.958780 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 1 08:41:21.958786 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 1 08:41:21.958793 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 1 08:41:21.958800 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 1 08:41:21.958807 kernel: pcpu-alloc: [0] 0 1
Jul 1 08:41:21.958835 kernel: Hyper-V: PV spinlocks enabled
Jul 1 08:41:21.958842 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 1 08:41:21.958849 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=03b744fdab9d0c2a6ce16909d1444c286b74402b7ab027472687ca33469d417f
Jul 1 08:41:21.958856 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 1 08:41:21.958862 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 1 08:41:21.958869 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 1 08:41:21.958876 kernel: Fallback order for Node 0: 0
Jul 1 08:41:21.958883 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jul 1 08:41:21.958892 kernel: Policy zone: Normal
Jul 1 08:41:21.958899 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 1 08:41:21.958906 kernel: software IO TLB: area num 2.
Jul 1 08:41:21.958913 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 1 08:41:21.958921 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 1 08:41:21.958928 kernel: ftrace: allocated 157 pages with 5 groups
Jul 1 08:41:21.958935 kernel: Dynamic Preempt: voluntary
Jul 1 08:41:21.958942 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 1 08:41:21.958950 kernel: rcu: RCU event tracing is enabled.
Jul 1 08:41:21.958965 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 1 08:41:21.958973 kernel: Trampoline variant of Tasks RCU enabled.
Jul 1 08:41:21.958980 kernel: Rude variant of Tasks RCU enabled.
Jul 1 08:41:21.958989 kernel: Tracing variant of Tasks RCU enabled.
Jul 1 08:41:21.958997 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 1 08:41:21.959004 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 1 08:41:21.959012 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 1 08:41:21.959020 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 1 08:41:21.959027 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 1 08:41:21.959034 kernel: Using NULL legacy PIC
Jul 1 08:41:21.959043 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 1 08:41:21.959051 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 1 08:41:21.959058 kernel: Console: colour dummy device 80x25
Jul 1 08:41:21.959066 kernel: printk: legacy console [tty1] enabled
Jul 1 08:41:21.959073 kernel: printk: legacy console [ttyS0] enabled
Jul 1 08:41:21.959080 kernel: printk: legacy bootconsole [earlyser0] disabled
Jul 1 08:41:21.959087 kernel: ACPI: Core revision 20240827
Jul 1 08:41:21.959095 kernel: Failed to register legacy timer interrupt
Jul 1 08:41:21.959102 kernel: APIC: Switch to symmetric I/O mode setup
Jul 1 08:41:21.959109 kernel: x2apic enabled
Jul 1 08:41:21.959117 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 1 08:41:21.959124 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 1 08:41:21.959131 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 1 08:41:21.959139 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jul 1 08:41:21.959146 kernel: Hyper-V: Using IPI hypercalls
Jul 1 08:41:21.959154 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jul 1 08:41:21.959162 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jul 1 08:41:21.959170 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jul 1 08:41:21.959178 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jul 1 08:41:21.959185 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jul 1 08:41:21.959193 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jul 1 08:41:21.959201 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jul 1 08:41:21.959208 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
Jul 1 08:41:21.959216 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 1 08:41:21.959233 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 1 08:41:21.959240 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 1 08:41:21.959248 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 1 08:41:21.959253 kernel: Spectre V2 : Mitigation: Retpolines
Jul 1 08:41:21.959261 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 1 08:41:21.959268 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 1 08:41:21.959276 kernel: RETBleed: Vulnerable
Jul 1 08:41:21.959283 kernel: Speculative Store Bypass: Vulnerable
Jul 1 08:41:21.959292 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 1 08:41:21.959334 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 1 08:41:21.959342 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 1 08:41:21.959352 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 1 08:41:21.959359 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 1 08:41:21.959367 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 1 08:41:21.959374 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 1 08:41:21.959382 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jul 1 08:41:21.959390 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jul 1 08:41:21.959398 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jul 1 08:41:21.959406 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 1 08:41:21.959414 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jul 1 08:41:21.959421 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jul 1 08:41:21.959429 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 1 08:41:21.959438 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jul 1 08:41:21.959445 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jul 1 08:41:21.959453 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jul 1 08:41:21.959460 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jul 1 08:41:21.959469 kernel: Freeing SMP alternatives memory: 32K
Jul 1 08:41:21.959476 kernel: pid_max: default: 32768 minimum: 301
Jul 1 08:41:21.959484 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 1 08:41:21.959491 kernel: landlock: Up and running.
Jul 1 08:41:21.959499 kernel: SELinux: Initializing.
Jul 1 08:41:21.959506 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 1 08:41:21.959514 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 1 08:41:21.959522 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jul 1 08:41:21.959531 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jul 1 08:41:21.959539 kernel: signal: max sigframe size: 11952
Jul 1 08:41:21.959547 kernel: rcu: Hierarchical SRCU implementation.
Jul 1 08:41:21.959555 kernel: rcu: Max phase no-delay instances is 400.
Jul 1 08:41:21.959563 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 1 08:41:21.959571 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 1 08:41:21.959580 kernel: smp: Bringing up secondary CPUs ...
Jul 1 08:41:21.959588 kernel: smpboot: x86: Booting SMP configuration:
Jul 1 08:41:21.959595 kernel: .... node #0, CPUs: #1
Jul 1 08:41:21.959605 kernel: smp: Brought up 1 node, 2 CPUs
Jul 1 08:41:21.959612 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Jul 1 08:41:21.959621 kernel: Memory: 8077272K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54508K init, 2460K bss, 299996K reserved, 0K cma-reserved)
Jul 1 08:41:21.959629 kernel: devtmpfs: initialized
Jul 1 08:41:21.959637 kernel: x86/mm: Memory block size: 128MB
Jul 1 08:41:21.959645 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 1 08:41:21.959653 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 1 08:41:21.959660 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 1 08:41:21.959668 kernel: pinctrl core: initialized pinctrl subsystem
Jul 1 08:41:21.959677 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 1 08:41:21.959685 kernel: audit: initializing netlink subsys (disabled)
Jul 1 08:41:21.959693 kernel: audit: type=2000 audit(1751359279.029:1): state=initialized audit_enabled=0 res=1
Jul 1 08:41:21.959700 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 1 08:41:21.959708 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 1 08:41:21.959715 kernel: cpuidle: using governor menu
Jul 1 08:41:21.959723 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 1 08:41:21.959730 kernel: dca service started, version 1.12.1
Jul 1 08:41:21.959738 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jul 1 08:41:21.959747 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jul 1 08:41:21.959754 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 1 08:41:21.959762 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 1 08:41:21.959770 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 1 08:41:21.959777 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 1 08:41:21.959785 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 1 08:41:21.959793 kernel: ACPI: Added _OSI(Module Device)
Jul 1 08:41:21.959800 kernel: ACPI: Added _OSI(Processor Device)
Jul 1 08:41:21.959809 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 1 08:41:21.959844 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 1 08:41:21.959852 kernel: ACPI: Interpreter enabled
Jul 1 08:41:21.959860 kernel: ACPI: PM: (supports S0 S5)
Jul 1 08:41:21.959868 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 1 08:41:21.959875 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 1 08:41:21.959882 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 1 08:41:21.959890 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 1 08:41:21.959898 kernel: iommu: Default domain type: Translated
Jul 1 08:41:21.959906 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 1 08:41:21.959915 kernel: efivars: Registered efivars operations
Jul 1 08:41:21.959922 kernel: PCI: Using ACPI for IRQ routing
Jul 1 08:41:21.959930 kernel: PCI: System does not support PCI
Jul 1 08:41:21.959938 kernel: vgaarb: loaded
Jul 1 08:41:21.959946 kernel: clocksource: Switched to clocksource tsc-early
Jul 1 08:41:21.959953 kernel: VFS: Disk quotas dquot_6.6.0
Jul 1 08:41:21.959960 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 1 08:41:21.959968 kernel: pnp: PnP ACPI init
Jul 1 08:41:21.959975 kernel: pnp: PnP ACPI: found 3 devices
Jul 1 08:41:21.959984 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 1 08:41:21.959991 kernel: NET: Registered PF_INET protocol family
Jul 1 08:41:21.959999 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 1 08:41:21.960007 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 1 08:41:21.960015 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 1 08:41:21.960023 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 1 08:41:21.960030 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 1 08:41:21.960038 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 1 08:41:21.960047 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 1 08:41:21.960055 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 1 08:41:21.960062 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 1 08:41:21.960070 kernel: NET: Registered PF_XDP protocol family
Jul 1 08:41:21.960077 kernel: PCI: CLS 0 bytes, default 64
Jul 1 08:41:21.960085 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 1 08:41:21.960093 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB)
Jul 1 08:41:21.960100 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jul 1 08:41:21.960108 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jul 1 08:41:21.960117 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jul 1 08:41:21.960125 kernel: clocksource: Switched to clocksource tsc
Jul 1 08:41:21.960132 kernel: Initialise system trusted keyrings
Jul 1 08:41:21.960139 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 1 08:41:21.960147 kernel: Key type asymmetric registered
Jul 1 08:41:21.960154 kernel: Asymmetric key parser 'x509' registered
Jul 1 08:41:21.960162 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 1 08:41:21.960169 kernel: io scheduler mq-deadline registered
Jul 1 08:41:21.960177 kernel: io scheduler kyber registered
Jul 1 08:41:21.960186 kernel: io scheduler bfq registered
Jul 1 08:41:21.960194 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 1 08:41:21.960202 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 1 08:41:21.960209 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 1 08:41:21.960216 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 1 08:41:21.960224 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jul 1 08:41:21.960231 kernel: i8042: PNP: No PS/2 controller found.
Jul 1 08:41:21.960349 kernel: rtc_cmos 00:02: registered as rtc0
Jul 1 08:41:21.960422 kernel: rtc_cmos 00:02: setting system clock to 2025-07-01T08:41:21 UTC (1751359281)
Jul 1 08:41:21.960484 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul 1 08:41:21.960495 kernel: intel_pstate: Intel P-state driver initializing
Jul 1 08:41:21.960503 kernel: efifb: probing for efifb
Jul 1 08:41:21.960511 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 1 08:41:21.960520 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 1 08:41:21.960528 kernel: efifb: scrolling: redraw
Jul 1 08:41:21.960536 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 1 08:41:21.960544 kernel: Console: switching to colour frame buffer device 128x48
Jul 1 08:41:21.960554 kernel: fb0: EFI VGA frame buffer device
Jul 1 08:41:21.960561 kernel: pstore: Using crash dump compression: deflate
Jul 1 08:41:21.960570 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 1 08:41:21.960578 kernel: NET: Registered PF_INET6 protocol family
Jul 1 08:41:21.960586 kernel: Segment Routing with IPv6
Jul 1 08:41:21.960593 kernel: In-situ OAM (IOAM) with IPv6
Jul 1 08:41:21.960602 kernel: NET: Registered PF_PACKET protocol family
Jul 1 08:41:21.960611 kernel: Key type dns_resolver registered
Jul 1 08:41:21.960621 kernel: IPI shorthand broadcast: enabled
Jul 1 08:41:21.960633 kernel: sched_clock: Marking stable (2784003615, 88525452)->(3202374293, -329845226)
Jul 1 08:41:21.960642 kernel: registered taskstats version 1
Jul 1 08:41:21.960653 kernel: Loading compiled-in X.509 certificates
Jul 1 08:41:21.960662 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: bdab85da21e6e40e781d68d3bf17f0a40ee7357c'
Jul 1 08:41:21.960672 kernel: Demotion targets for Node 0: null
Jul 1 08:41:21.960682 kernel: Key type .fscrypt registered
Jul 1 08:41:21.960690 kernel: Key type fscrypt-provisioning registered
Jul 1 08:41:21.960700 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 1 08:41:21.960710 kernel: ima: Allocated hash algorithm: sha1
Jul 1 08:41:21.960722 kernel: ima: No architecture policies found
Jul 1 08:41:21.960733 kernel: clk: Disabling unused clocks
Jul 1 08:41:21.960743 kernel: Warning: unable to open an initial console.
Jul 1 08:41:21.960752 kernel: Freeing unused kernel image (initmem) memory: 54508K
Jul 1 08:41:21.960762 kernel: Write protecting the kernel read-only data: 24576k
Jul 1 08:41:21.960771 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 1 08:41:21.960780 kernel: Run /init as init process
Jul 1 08:41:21.960789 kernel: with arguments:
Jul 1 08:41:21.960799 kernel: /init
Jul 1 08:41:21.960810 kernel: with environment:
Jul 1 08:41:21.963116 kernel: HOME=/
Jul 1 08:41:21.963126 kernel: TERM=linux
Jul 1 08:41:21.963135 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 1 08:41:21.963145 systemd[1]: Successfully made /usr/ read-only.
Jul 1 08:41:21.963156 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 1 08:41:21.963166 systemd[1]: Detected virtualization microsoft.
Jul 1 08:41:21.963177 systemd[1]: Detected architecture x86-64.
Jul 1 08:41:21.963185 systemd[1]: Running in initrd.
Jul 1 08:41:21.963193 systemd[1]: No hostname configured, using default hostname.
Jul 1 08:41:21.963202 systemd[1]: Hostname set to .
Jul 1 08:41:21.963210 systemd[1]: Initializing machine ID from random generator.
Jul 1 08:41:21.963218 systemd[1]: Queued start job for default target initrd.target.
Jul 1 08:41:21.963226 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 1 08:41:21.963234 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 1 08:41:21.963246 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 1 08:41:21.963254 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 1 08:41:21.963262 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 1 08:41:21.963271 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 1 08:41:21.963281 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 1 08:41:21.963289 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 1 08:41:21.963297 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 1 08:41:21.963307 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 1 08:41:21.963315 systemd[1]: Reached target paths.target - Path Units.
Jul 1 08:41:21.963324 systemd[1]: Reached target slices.target - Slice Units.
Jul 1 08:41:21.963332 systemd[1]: Reached target swap.target - Swaps.
Jul 1 08:41:21.963341 systemd[1]: Reached target timers.target - Timer Units.
Jul 1 08:41:21.963349 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 1 08:41:21.963357 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 1 08:41:21.963365 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 1 08:41:21.963373 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 1 08:41:21.963382 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 1 08:41:21.963390 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 1 08:41:21.963399 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 1 08:41:21.963407 systemd[1]: Reached target sockets.target - Socket Units.
Jul 1 08:41:21.963416 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 1 08:41:21.963424 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 1 08:41:21.963432 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 1 08:41:21.963441 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 1 08:41:21.963451 systemd[1]: Starting systemd-fsck-usr.service...
Jul 1 08:41:21.963460 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 1 08:41:21.963468 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 1 08:41:21.963484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 1 08:41:21.963494 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 1 08:41:21.963524 systemd-journald[205]: Collecting audit messages is disabled.
Jul 1 08:41:21.963548 systemd-journald[205]: Journal started
Jul 1 08:41:21.963573 systemd-journald[205]: Runtime Journal (/run/log/journal/4680f141104344fca41a32671892e3ce) is 8M, max 158.9M, 150.9M free.
Jul 1 08:41:21.957790 systemd-modules-load[206]: Inserted module 'overlay'
Jul 1 08:41:21.966890 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 1 08:41:21.970833 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 1 08:41:21.973423 systemd[1]: Finished systemd-fsck-usr.service.
Jul 1 08:41:21.978916 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 1 08:41:21.983979 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 1 08:41:21.994292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 1 08:41:22.002572 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 1 08:41:22.002592 kernel: Bridge firewalling registered
Jul 1 08:41:21.997489 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 1 08:41:22.003496 systemd-modules-load[206]: Inserted module 'br_netfilter'
Jul 1 08:41:22.005390 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 1 08:41:22.016900 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 1 08:41:22.017369 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 1 08:41:22.017629 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 1 08:41:22.019140 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 1 08:41:22.024722 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 1 08:41:22.039493 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 1 08:41:22.041805 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 1 08:41:22.045660 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 1 08:41:22.048879 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 1 08:41:22.054161 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 1 08:41:22.074619 systemd-resolved[245]: Positive Trust Anchors: Jul 1 08:41:22.074631 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 1 08:41:22.080809 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=03b744fdab9d0c2a6ce16909d1444c286b74402b7ab027472687ca33469d417f Jul 1 08:41:22.074662 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 
170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 1 08:41:22.091133 systemd-resolved[245]: Defaulting to hostname 'linux'. Jul 1 08:41:22.103439 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 1 08:41:22.109181 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 1 08:41:22.136830 kernel: SCSI subsystem initialized Jul 1 08:41:22.142827 kernel: Loading iSCSI transport class v2.0-870. Jul 1 08:41:22.150851 kernel: iscsi: registered transport (tcp) Jul 1 08:41:22.166022 kernel: iscsi: registered transport (qla4xxx) Jul 1 08:41:22.166063 kernel: QLogic iSCSI HBA Driver Jul 1 08:41:22.177387 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 1 08:41:22.185775 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 1 08:41:22.186458 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 1 08:41:22.213182 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 1 08:41:22.216506 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 1 08:41:22.257830 kernel: raid6: avx512x4 gen() 45208 MB/s Jul 1 08:41:22.274824 kernel: raid6: avx512x2 gen() 44692 MB/s Jul 1 08:41:22.291823 kernel: raid6: avx512x1 gen() 29868 MB/s Jul 1 08:41:22.308822 kernel: raid6: avx2x4 gen() 40829 MB/s Jul 1 08:41:22.326824 kernel: raid6: avx2x2 gen() 43561 MB/s Jul 1 08:41:22.344424 kernel: raid6: avx2x1 gen() 31029 MB/s Jul 1 08:41:22.344450 kernel: raid6: using algorithm avx512x4 gen() 45208 MB/s Jul 1 08:41:22.362825 kernel: raid6: .... 
xor() 7710 MB/s, rmw enabled Jul 1 08:41:22.362844 kernel: raid6: using avx512x2 recovery algorithm Jul 1 08:41:22.378829 kernel: xor: automatically using best checksumming function avx Jul 1 08:41:22.481831 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 1 08:41:22.485545 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 1 08:41:22.487363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 1 08:41:22.502932 systemd-udevd[455]: Using default interface naming scheme 'v255'. Jul 1 08:41:22.506498 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 1 08:41:22.510152 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 1 08:41:22.535208 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation Jul 1 08:41:22.552088 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 1 08:41:22.555097 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 1 08:41:22.583119 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 1 08:41:22.588252 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 1 08:41:22.631255 kernel: cryptd: max_cpu_qlen set to 1000 Jul 1 08:41:22.647832 kernel: hv_vmbus: Vmbus version:5.3 Jul 1 08:41:22.651903 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 1 08:41:22.652045 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:41:22.657257 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:41:22.666353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:41:22.676436 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 1 08:41:22.676475 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 1 08:41:22.676488 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 1 08:41:22.677213 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 1 08:41:22.677285 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:41:22.685908 kernel: AES CTR mode by8 optimization enabled Jul 1 08:41:22.686090 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:41:22.694875 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 1 08:41:22.703263 kernel: hv_vmbus: registering driver hv_netvsc Jul 1 08:41:22.703293 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 1 08:41:22.704836 kernel: PTP clock support registered Jul 1 08:41:22.714828 kernel: hv_vmbus: registering driver hv_pci Jul 1 08:41:22.720516 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52878398 (unnamed net_device) (uninitialized): VF slot 1 added Jul 1 08:41:22.720676 kernel: hv_vmbus: registering driver hv_storvsc Jul 1 08:41:22.724529 kernel: hv_utils: Registering HyperV Utility Driver Jul 1 08:41:22.724563 kernel: hv_vmbus: registering driver hv_utils Jul 1 08:41:22.726635 kernel: hv_utils: Shutdown IC version 3.2 Jul 1 08:41:22.726557 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 1 08:41:22.735916 kernel: hv_vmbus: registering driver hid_hyperv Jul 1 08:41:22.735938 kernel: hv_utils: Heartbeat IC version 3.0 Jul 1 08:41:22.735950 kernel: hv_utils: TimeSync IC version 4.0 Jul 1 08:41:22.969337 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jul 1 08:41:22.969576 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 1 08:41:22.969590 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 1 08:41:22.969396 systemd-resolved[245]: Clock change detected. Flushing caches. Jul 1 08:41:22.973816 kernel: scsi host0: storvsc_host_t Jul 1 08:41:22.973962 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jul 1 08:41:22.978498 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jul 1 08:41:22.978542 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jul 1 08:41:22.982841 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jul 1 08:41:22.988124 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jul 1 08:41:22.990911 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jul 1 08:41:23.006869 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jul 1 08:41:23.010862 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jul 1 08:41:23.011038 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jul 1 08:41:23.030638 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 1 08:41:23.030805 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 1 08:41:23.031766 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 1 08:41:23.033871 kernel: nvme nvme0: pci function c05b:00:00.0 Jul 1 08:41:23.036182 kernel: nvme c05b:00:00.0: 
enabling device (0000 -> 0002) Jul 1 08:41:23.051877 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#109 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 1 08:41:23.066774 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#69 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 1 08:41:23.297855 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 1 08:41:23.302781 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 1 08:41:23.528775 kernel: nvme nvme0: using unchecked data buffer Jul 1 08:41:23.851881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jul 1 08:41:23.883889 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jul 1 08:41:23.906543 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jul 1 08:41:23.942250 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 1 08:41:23.946814 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 1 08:41:23.949716 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 1 08:41:23.951392 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 1 08:41:23.954834 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 1 08:41:23.955974 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 1 08:41:23.961250 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 1 08:41:23.975337 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 1 08:41:23.983777 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jul 1 08:41:23.988544 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jul 1 08:41:23.995621 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jul 1 08:41:23.996729 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jul 1 08:41:23.996822 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jul 1 08:41:23.996888 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jul 1 08:41:23.996907 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 1 08:41:24.002192 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jul 1 08:41:24.004762 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 1 08:41:24.015616 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jul 1 08:41:24.015655 kernel: pci 7870:00:00.0: enabling Extended Tags Jul 1 08:41:24.033833 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jul 1 08:41:24.033971 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jul 1 08:41:24.034104 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jul 1 08:41:24.054476 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jul 1 08:41:24.068771 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jul 1 08:41:24.074810 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52878398 eth0: VF registering: eth1 Jul 1 08:41:24.074986 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jul 1 08:41:24.079780 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jul 1 08:41:25.012832 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 1 08:41:25.012896 disk-uuid[675]: The operation has completed successfully. Jul 1 08:41:25.063330 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 1 08:41:25.063410 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 1 08:41:25.090942 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jul 1 08:41:25.102612 sh[709]: Success Jul 1 08:41:25.123839 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 1 08:41:25.123890 kernel: device-mapper: uevent: version 1.0.3 Jul 1 08:41:25.124869 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 1 08:41:25.132770 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 1 08:41:25.539086 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 1 08:41:25.542994 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 1 08:41:25.556495 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 1 08:41:25.567776 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 1 08:41:25.570763 kernel: BTRFS: device fsid aeab36fb-d8a9-440c-a872-a8cce0218739 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (722) Jul 1 08:41:25.574077 kernel: BTRFS info (device dm-0): first mount of filesystem aeab36fb-d8a9-440c-a872-a8cce0218739 Jul 1 08:41:25.574113 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:41:25.575152 kernel: BTRFS info (device dm-0): using free-space-tree Jul 1 08:41:26.081665 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 1 08:41:26.083860 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 1 08:41:26.087731 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 1 08:41:26.088370 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 1 08:41:26.097250 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 1 08:41:26.114763 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (745) Jul 1 08:41:26.118104 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:41:26.118140 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:41:26.118150 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 1 08:41:26.156800 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:41:26.156970 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 1 08:41:26.162860 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 1 08:41:26.176576 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 1 08:41:26.180449 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 1 08:41:26.200427 systemd-networkd[891]: lo: Link UP Jul 1 08:41:26.200433 systemd-networkd[891]: lo: Gained carrier Jul 1 08:41:26.210580 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jul 1 08:41:26.210777 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 1 08:41:26.211040 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52878398 eth0: Data path switched to VF: enP30832s1 Jul 1 08:41:26.201879 systemd-networkd[891]: Enumeration completed Jul 1 08:41:26.202326 systemd-networkd[891]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 08:41:26.202329 systemd-networkd[891]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 1 08:41:26.202534 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 1 08:41:26.207028 systemd[1]: Reached target network.target - Network. 
Jul 1 08:41:26.213854 systemd-networkd[891]: enP30832s1: Link UP Jul 1 08:41:26.213909 systemd-networkd[891]: eth0: Link UP Jul 1 08:41:26.213990 systemd-networkd[891]: eth0: Gained carrier Jul 1 08:41:26.213998 systemd-networkd[891]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 08:41:26.220952 systemd-networkd[891]: enP30832s1: Gained carrier Jul 1 08:41:26.240772 systemd-networkd[891]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 1 08:41:27.336368 systemd-networkd[891]: eth0: Gained IPv6LL Jul 1 08:41:27.819734 ignition[872]: Ignition 2.21.0 Jul 1 08:41:27.819746 ignition[872]: Stage: fetch-offline Jul 1 08:41:27.822008 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 1 08:41:27.819848 ignition[872]: no configs at "/usr/lib/ignition/base.d" Jul 1 08:41:27.824676 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 1 08:41:27.819855 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 1 08:41:27.819930 ignition[872]: parsed url from cmdline: "" Jul 1 08:41:27.819933 ignition[872]: no config URL provided Jul 1 08:41:27.819937 ignition[872]: reading system config file "/usr/lib/ignition/user.ign" Jul 1 08:41:27.819941 ignition[872]: no config at "/usr/lib/ignition/user.ign" Jul 1 08:41:27.819946 ignition[872]: failed to fetch config: resource requires networking Jul 1 08:41:27.820837 ignition[872]: Ignition finished successfully Jul 1 08:41:27.851838 ignition[901]: Ignition 2.21.0 Jul 1 08:41:27.851847 ignition[901]: Stage: fetch Jul 1 08:41:27.852072 ignition[901]: no configs at "/usr/lib/ignition/base.d" Jul 1 08:41:27.852079 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 1 08:41:27.852157 ignition[901]: parsed url from cmdline: "" Jul 1 08:41:27.852163 ignition[901]: no config URL provided Jul 1 08:41:27.852167 ignition[901]: reading system config file "/usr/lib/ignition/user.ign" Jul 1 08:41:27.852172 ignition[901]: no config at "/usr/lib/ignition/user.ign" Jul 1 08:41:27.852197 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 1 08:41:27.910924 systemd-networkd[891]: enP30832s1: Gained IPv6LL Jul 1 08:41:27.923146 ignition[901]: GET result: OK Jul 1 08:41:27.923206 ignition[901]: config has been read from IMDS userdata Jul 1 08:41:27.923251 ignition[901]: parsing config with SHA512: 8964c0bf91f6441acbb3784381635b4a565f0375e3aa078ea89dee12d9cd78f482d3397ca07d13c70cc457c63e82aee1f0cba9553b64ccb23a3474de8025c1bf Jul 1 08:41:27.926892 unknown[901]: fetched base config from "system" Jul 1 08:41:27.926900 unknown[901]: fetched base config from "system" Jul 1 08:41:27.926904 unknown[901]: fetched user config from "azure" Jul 1 08:41:27.928865 ignition[901]: fetch: fetch complete Jul 1 08:41:27.928868 ignition[901]: fetch: fetch passed Jul 1 
08:41:27.928898 ignition[901]: Ignition finished successfully Jul 1 08:41:27.933249 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 1 08:41:27.936973 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 1 08:41:27.958367 ignition[907]: Ignition 2.21.0 Jul 1 08:41:27.958376 ignition[907]: Stage: kargs Jul 1 08:41:27.960316 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 1 08:41:27.958541 ignition[907]: no configs at "/usr/lib/ignition/base.d" Jul 1 08:41:27.965719 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 1 08:41:27.958548 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 1 08:41:27.959183 ignition[907]: kargs: kargs passed Jul 1 08:41:27.959213 ignition[907]: Ignition finished successfully Jul 1 08:41:27.989960 ignition[913]: Ignition 2.21.0 Jul 1 08:41:27.989968 ignition[913]: Stage: disks Jul 1 08:41:27.990133 ignition[913]: no configs at "/usr/lib/ignition/base.d" Jul 1 08:41:27.992493 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 1 08:41:27.990140 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 1 08:41:27.995970 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 1 08:41:27.990938 ignition[913]: disks: disks passed Jul 1 08:41:27.998216 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 1 08:41:27.990967 ignition[913]: Ignition finished successfully Jul 1 08:41:28.000821 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 1 08:41:28.004805 systemd[1]: Reached target sysinit.target - System Initialization. Jul 1 08:41:28.008782 systemd[1]: Reached target basic.target - Basic System. Jul 1 08:41:28.011442 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 1 08:41:28.107809 systemd-fsck[921]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jul 1 08:41:28.111471 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 1 08:41:28.115539 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 1 08:41:28.534823 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 18421243-07cc-41b2-b496-d6a2cef84352 r/w with ordered data mode. Quota mode: none. Jul 1 08:41:28.535468 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 1 08:41:28.537821 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 1 08:41:28.566815 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 1 08:41:28.580827 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 1 08:41:28.585947 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 1 08:41:28.589047 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 1 08:41:28.598795 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (930) Jul 1 08:41:28.589083 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 1 08:41:28.604883 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:41:28.604903 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:41:28.604932 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 1 08:41:28.593797 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 1 08:41:28.606663 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 1 08:41:28.610549 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 1 08:41:29.605173 coreos-metadata[932]: Jul 01 08:41:29.605 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 1 08:41:29.609465 coreos-metadata[932]: Jul 01 08:41:29.609 INFO Fetch successful Jul 1 08:41:29.611821 coreos-metadata[932]: Jul 01 08:41:29.610 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 1 08:41:29.620381 coreos-metadata[932]: Jul 01 08:41:29.620 INFO Fetch successful Jul 1 08:41:29.625030 coreos-metadata[932]: Jul 01 08:41:29.624 INFO wrote hostname ci-9999.9.9-s-875ad0e937 to /sysroot/etc/hostname Jul 1 08:41:29.627905 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 1 08:41:30.196531 initrd-setup-root[961]: cut: /sysroot/etc/passwd: No such file or directory Jul 1 08:41:30.269204 initrd-setup-root[968]: cut: /sysroot/etc/group: No such file or directory Jul 1 08:41:30.299940 initrd-setup-root[975]: cut: /sysroot/etc/shadow: No such file or directory Jul 1 08:41:30.304268 initrd-setup-root[982]: cut: /sysroot/etc/gshadow: No such file or directory Jul 1 08:41:31.438221 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 1 08:41:31.442019 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 1 08:41:31.446171 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 1 08:41:31.457214 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 1 08:41:31.460963 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:41:31.480353 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 1 08:41:31.485392 ignition[1049]: INFO : Ignition 2.21.0 Jul 1 08:41:31.485392 ignition[1049]: INFO : Stage: mount Jul 1 08:41:31.485392 ignition[1049]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 1 08:41:31.485392 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 1 08:41:31.496844 ignition[1049]: INFO : mount: mount passed Jul 1 08:41:31.496844 ignition[1049]: INFO : Ignition finished successfully Jul 1 08:41:31.489613 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 1 08:41:31.495466 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 1 08:41:31.511139 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 1 08:41:31.532766 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1061) Jul 1 08:41:31.532799 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:41:31.534817 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:41:31.535854 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 1 08:41:31.549976 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 1 08:41:31.574701 ignition[1078]: INFO : Ignition 2.21.0 Jul 1 08:41:31.574701 ignition[1078]: INFO : Stage: files Jul 1 08:41:31.577986 ignition[1078]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 1 08:41:31.577986 ignition[1078]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 1 08:41:31.577986 ignition[1078]: DEBUG : files: compiled without relabeling support, skipping Jul 1 08:41:31.587041 ignition[1078]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 1 08:41:31.587041 ignition[1078]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 1 08:41:31.610829 ignition[1078]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 1 08:41:31.612625 ignition[1078]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 1 08:41:31.612625 ignition[1078]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 1 08:41:31.611147 unknown[1078]: wrote ssh authorized keys file for user: core Jul 1 08:41:31.698911 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 1 08:41:31.702834 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 1 08:41:51.295047 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 1 08:41:51.814193 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 1 08:41:51.818866 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 1 08:41:51.818866 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 1 
08:41:51.818866 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 1 08:41:51.818866 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 1 08:41:51.818866 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 1 08:41:51.818866 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 1 08:41:51.818866 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 1 08:41:51.818866 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 1 08:41:51.840809 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 1 08:41:51.840809 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 1 08:41:51.840809 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 1 08:41:51.840809 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 1 08:41:51.840809 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 1 08:41:51.840809 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 1 08:41:52.711690 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 1 08:41:53.308763 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 1 08:41:53.308763 ignition[1078]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 1 08:41:53.324036 ignition[1078]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 1 08:41:53.329907 ignition[1078]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 1 08:41:53.329907 ignition[1078]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 1 08:41:53.329907 ignition[1078]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 1 08:41:53.340562 ignition[1078]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 1 08:41:53.340562 ignition[1078]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 1 08:41:53.340562 ignition[1078]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 1 08:41:53.340562 ignition[1078]: INFO : files: files passed Jul 1 08:41:53.340562 ignition[1078]: INFO : Ignition finished successfully Jul 1 08:41:53.333376 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 1 08:41:53.337337 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 1 08:41:53.349383 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jul 1 08:41:53.353163 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 1 08:41:53.353247 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 1 08:41:53.371259 initrd-setup-root-after-ignition[1107]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 1 08:41:53.371259 initrd-setup-root-after-ignition[1107]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 1 08:41:53.377488 initrd-setup-root-after-ignition[1111]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 1 08:41:53.374641 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 1 08:41:53.379709 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 1 08:41:53.382839 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 1 08:41:53.421181 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 1 08:41:53.421264 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 1 08:41:53.424975 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 1 08:41:53.427423 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 1 08:41:53.428528 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 1 08:41:53.430238 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 1 08:41:53.444437 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 1 08:41:53.448439 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 1 08:41:53.463659 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 1 08:41:53.464139 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 1 08:41:53.464393 systemd[1]: Stopped target timers.target - Timer Units.
Jul 1 08:41:53.464876 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 1 08:41:53.464999 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 1 08:41:53.465382 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 1 08:41:53.465675 systemd[1]: Stopped target basic.target - Basic System.
Jul 1 08:41:53.472049 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 1 08:41:53.474639 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 1 08:41:53.478882 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 1 08:41:53.482588 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 1 08:41:53.485887 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 1 08:41:53.488740 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 1 08:41:53.492818 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 1 08:41:53.502904 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 1 08:41:53.504081 systemd[1]: Stopped target swap.target - Swaps.
Jul 1 08:41:53.504207 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 1 08:41:53.504301 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 1 08:41:53.508843 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 1 08:41:53.511893 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 1 08:41:53.512819 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 1 08:41:53.513210 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 1 08:41:53.513300 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 1 08:41:53.513409 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 1 08:41:53.513903 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 1 08:41:53.513998 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 1 08:41:53.514174 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 1 08:41:53.514257 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 1 08:41:53.514500 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 1 08:41:53.514583 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 1 08:41:53.515642 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 1 08:41:53.542382 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 1 08:41:53.546486 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 1 08:41:53.546624 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 1 08:41:53.562796 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 1 08:41:53.562928 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 1 08:41:53.568101 ignition[1131]: INFO : Ignition 2.21.0
Jul 1 08:41:53.568101 ignition[1131]: INFO : Stage: umount
Jul 1 08:41:53.568101 ignition[1131]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 1 08:41:53.568101 ignition[1131]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 1 08:41:53.568277 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 1 08:41:53.591794 ignition[1131]: INFO : umount: umount passed
Jul 1 08:41:53.591794 ignition[1131]: INFO : Ignition finished successfully
Jul 1 08:41:53.568350 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 1 08:41:53.572867 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 1 08:41:53.572931 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 1 08:41:53.576920 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 1 08:41:53.576959 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 1 08:41:53.577080 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 1 08:41:53.577107 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 1 08:41:53.577341 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 1 08:41:53.577368 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 1 08:41:53.577400 systemd[1]: Stopped target network.target - Network.
Jul 1 08:41:53.577506 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 1 08:41:53.577528 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 1 08:41:53.586815 systemd[1]: Stopped target paths.target - Path Units.
Jul 1 08:41:53.588890 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 1 08:41:53.593318 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 1 08:41:53.595130 systemd[1]: Stopped target slices.target - Slice Units.
Jul 1 08:41:53.599029 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 1 08:41:53.603505 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 1 08:41:53.603531 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 1 08:41:53.608583 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 1 08:41:53.608610 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 1 08:41:53.612702 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 1 08:41:53.612770 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 1 08:41:53.615998 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 1 08:41:53.617528 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 1 08:41:53.620823 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 1 08:41:53.627078 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 1 08:41:53.630057 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 1 08:41:53.630140 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 1 08:41:53.635618 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 1 08:41:53.635773 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 1 08:41:53.635866 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 1 08:41:53.642362 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 1 08:41:53.642915 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 1 08:41:53.660901 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 1 08:41:53.660939 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 1 08:41:53.664831 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 1 08:41:53.666086 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 1 08:41:53.666139 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 1 08:41:53.669839 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 1 08:41:53.669886 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 1 08:41:53.673157 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 1 08:41:53.673193 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 1 08:41:53.673613 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 1 08:41:53.673645 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 1 08:41:53.697816 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52878398 eth0: Data path switched from VF: enP30832s1
Jul 1 08:41:53.697957 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jul 1 08:41:53.673884 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 1 08:41:53.675035 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 1 08:41:53.675084 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 1 08:41:53.688003 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 1 08:41:53.688139 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 1 08:41:53.692167 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 1 08:41:53.692227 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 1 08:41:53.695562 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 1 08:41:53.696128 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 1 08:41:53.702302 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 1 08:41:53.702351 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 1 08:41:53.708459 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 1 08:41:53.708506 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 1 08:41:53.710157 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 1 08:41:53.710191 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 1 08:41:53.719384 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 1 08:41:53.723803 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 1 08:41:53.723862 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 1 08:41:53.725359 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 1 08:41:53.725398 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 1 08:41:53.728828 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 1 08:41:53.728871 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 1 08:41:53.736957 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 1 08:41:53.736993 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 1 08:41:53.738955 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 1 08:41:53.738991 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 1 08:41:53.746655 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 1 08:41:53.746705 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 1 08:41:53.746727 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 1 08:41:53.746744 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 1 08:41:53.746818 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 1 08:41:53.747171 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 1 08:41:53.747230 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 1 08:41:53.750418 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 1 08:41:53.750473 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 1 08:41:54.053139 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 1 08:41:54.053280 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 1 08:41:54.057395 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 1 08:41:54.059524 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 1 08:41:54.059585 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 1 08:41:54.064670 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 1 08:41:54.080921 systemd[1]: Switching root.
Jul 1 08:41:54.120094 systemd-journald[205]: Journal stopped
Jul 1 08:41:56.689435 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Jul 1 08:41:56.689461 kernel: SELinux: policy capability network_peer_controls=1
Jul 1 08:41:56.689472 kernel: SELinux: policy capability open_perms=1
Jul 1 08:41:56.689480 kernel: SELinux: policy capability extended_socket_class=1
Jul 1 08:41:56.689488 kernel: SELinux: policy capability always_check_network=0
Jul 1 08:41:56.689496 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 1 08:41:56.689506 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 1 08:41:56.689516 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 1 08:41:56.689524 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 1 08:41:56.689532 kernel: SELinux: policy capability userspace_initial_context=0
Jul 1 08:41:56.689540 kernel: audit: type=1403 audit(1751359314.825:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 1 08:41:56.689549 systemd[1]: Successfully loaded SELinux policy in 66.037ms.
Jul 1 08:41:56.689559 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.656ms.
Jul 1 08:41:56.689570 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 1 08:41:56.689581 systemd[1]: Detected virtualization microsoft.
Jul 1 08:41:56.689590 systemd[1]: Detected architecture x86-64.
Jul 1 08:41:56.689598 systemd[1]: Detected first boot.
Jul 1 08:41:56.689607 systemd[1]: Hostname set to .
Jul 1 08:41:56.689617 systemd[1]: Initializing machine ID from random generator.
Jul 1 08:41:56.689626 zram_generator::config[1175]: No configuration found.
Jul 1 08:41:56.689635 kernel: Guest personality initialized and is inactive
Jul 1 08:41:56.689643 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Jul 1 08:41:56.689652 kernel: Initialized host personality
Jul 1 08:41:56.689660 kernel: NET: Registered PF_VSOCK protocol family
Jul 1 08:41:56.689668 systemd[1]: Populated /etc with preset unit settings.
Jul 1 08:41:56.689679 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 1 08:41:56.689688 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 1 08:41:56.689697 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 1 08:41:56.689706 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 1 08:41:56.689715 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 1 08:41:56.689724 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 1 08:41:56.689733 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 1 08:41:56.689743 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 1 08:41:56.692417 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 1 08:41:56.692433 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 1 08:41:56.692442 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 1 08:41:56.692450 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 1 08:41:56.692459 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 1 08:41:56.692467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 1 08:41:56.692476 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 1 08:41:56.692489 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 1 08:41:56.692500 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 1 08:41:56.692509 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 1 08:41:56.692519 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 1 08:41:56.692528 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 1 08:41:56.692537 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 1 08:41:56.692545 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 1 08:41:56.692553 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 1 08:41:56.692563 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 1 08:41:56.692572 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 1 08:41:56.692580 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 1 08:41:56.692589 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 1 08:41:56.692598 systemd[1]: Reached target slices.target - Slice Units.
Jul 1 08:41:56.692606 systemd[1]: Reached target swap.target - Swaps.
Jul 1 08:41:56.692639 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 1 08:41:56.692647 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 1 08:41:56.692659 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 1 08:41:56.692668 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 1 08:41:56.692677 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 1 08:41:56.692685 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 1 08:41:56.692694 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 1 08:41:56.692704 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 1 08:41:56.692713 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 1 08:41:56.692721 systemd[1]: Mounting media.mount - External Media Directory...
Jul 1 08:41:56.692731 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 1 08:41:56.692740 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 1 08:41:56.692768 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 1 08:41:56.692779 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 1 08:41:56.692788 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 1 08:41:56.692800 systemd[1]: Reached target machines.target - Containers.
Jul 1 08:41:56.692812 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 1 08:41:56.692822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 1 08:41:56.692831 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 1 08:41:56.692840 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 1 08:41:56.692848 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 1 08:41:56.692857 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 1 08:41:56.692866 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 1 08:41:56.692875 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 1 08:41:56.692886 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 1 08:41:56.692897 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 1 08:41:56.692907 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 1 08:41:56.692916 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 1 08:41:56.692924 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 1 08:41:56.692933 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 1 08:41:56.692942 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 1 08:41:56.692951 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 1 08:41:56.692963 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 1 08:41:56.692972 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 1 08:41:56.692981 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 1 08:41:56.692990 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 1 08:41:56.692999 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 1 08:41:56.693009 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 1 08:41:56.693018 systemd[1]: Stopped verity-setup.service.
Jul 1 08:41:56.693027 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 1 08:41:56.693038 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 1 08:41:56.693048 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 1 08:41:56.693057 systemd[1]: Mounted media.mount - External Media Directory.
Jul 1 08:41:56.693066 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 1 08:41:56.693096 systemd-journald[1258]: Collecting audit messages is disabled.
Jul 1 08:41:56.693120 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 1 08:41:56.693129 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 1 08:41:56.693138 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 1 08:41:56.693147 kernel: loop: module loaded
Jul 1 08:41:56.693156 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 1 08:41:56.693165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 1 08:41:56.693173 kernel: fuse: init (API version 7.41)
Jul 1 08:41:56.693181 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 1 08:41:56.693193 systemd-journald[1258]: Journal started
Jul 1 08:41:56.693214 systemd-journald[1258]: Runtime Journal (/run/log/journal/a25895afc3294eb0b5b0324f72612339) is 8M, max 158.9M, 150.9M free.
Jul 1 08:41:56.272069 systemd[1]: Queued start job for default target multi-user.target.
Jul 1 08:41:56.283124 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 1 08:41:56.283449 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 1 08:41:56.696640 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 1 08:41:56.703778 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 1 08:41:56.708348 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 1 08:41:56.708499 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 1 08:41:56.712024 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 1 08:41:56.712156 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 1 08:41:56.715093 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 1 08:41:56.715225 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 1 08:41:56.717048 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 1 08:41:56.720134 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 1 08:41:56.724515 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 1 08:41:56.726805 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 1 08:41:56.742610 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 1 08:41:56.747835 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 1 08:41:56.752911 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 1 08:41:56.756845 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 1 08:41:56.756884 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 1 08:41:56.759976 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 1 08:41:56.774434 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 1 08:41:56.779770 kernel: ACPI: bus type drm_connector registered
Jul 1 08:41:56.777623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 1 08:41:56.778687 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 1 08:41:56.782373 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 1 08:41:56.784229 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 1 08:41:56.784986 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 1 08:41:56.787568 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 1 08:41:56.788915 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 1 08:41:56.792481 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 1 08:41:56.797859 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 1 08:41:56.801950 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 1 08:41:56.802814 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 1 08:41:56.804669 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 1 08:41:56.806663 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 1 08:41:56.957104 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 1 08:41:56.967926 systemd-journald[1258]: Time spent on flushing to /var/log/journal/a25895afc3294eb0b5b0324f72612339 is 13.947ms for 988 entries.
Jul 1 08:41:56.967926 systemd-journald[1258]: System Journal (/var/log/journal/a25895afc3294eb0b5b0324f72612339) is 8M, max 2.6G, 2.6G free.
Jul 1 08:41:58.749237 systemd-journald[1258]: Received client request to flush runtime journal.
Jul 1 08:41:58.749298 kernel: loop0: detected capacity change from 0 to 146336
Jul 1 08:41:56.975846 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Jul 1 08:41:56.975854 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Jul 1 08:41:56.978045 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 1 08:41:57.006038 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 1 08:41:57.012949 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 1 08:41:57.015269 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 1 08:41:57.017192 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 1 08:41:57.019595 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 1 08:41:57.026643 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 1 08:41:58.750334 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 1 08:41:58.814040 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 1 08:41:58.818920 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 1 08:41:58.837701 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
Jul 1 08:41:58.837722 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
Jul 1 08:41:58.840336 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 1 08:41:59.009777 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 1 08:41:59.024765 kernel: loop1: detected capacity change from 0 to 224512
Jul 1 08:42:00.253772 kernel: loop2: detected capacity change from 0 to 28616
Jul 1 08:42:02.174335 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 1 08:42:02.178097 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 1 08:42:02.206892 systemd-udevd[1342]: Using default interface naming scheme 'v255'.
Jul 1 08:42:02.260764 kernel: loop3: detected capacity change from 0 to 114000
Jul 1 08:42:02.700068 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 1 08:42:02.701107 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 1 08:42:02.815367 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 1 08:42:02.819320 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 1 08:42:02.901905 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 1 08:42:02.934775 kernel: hv_vmbus: registering driver hyperv_fb
Jul 1 08:42:02.938806 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 1 08:42:02.938856 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 1 08:42:02.944666 kernel: Console: switching to colour dummy device 80x25
Jul 1 08:42:02.951103 kernel: Console: switching to colour frame buffer device 128x48
Jul 1 08:42:03.164792 kernel: hv_vmbus: registering driver hv_balloon
Jul 1 08:42:03.165812 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 1 08:42:03.170773 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#109 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 1 08:42:03.258011 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 1 08:42:03.291496 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 1 08:42:03.314237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 1 08:42:03.323311 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 1 08:42:03.323678 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 1 08:42:03.327936 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 1 08:42:03.342065 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 1 08:42:03.342194 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 1 08:42:03.346426 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 1 08:42:03.368770 kernel: mousedev: PS/2 mouse device common for all mice
Jul 1 08:42:03.759523 systemd-networkd[1349]: lo: Link UP
Jul 1 08:42:03.759530 systemd-networkd[1349]: lo: Gained carrier
Jul 1 08:42:03.760863 systemd-networkd[1349]: Enumeration completed
Jul 1 08:42:03.761000 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 1 08:42:03.762716 systemd-networkd[1349]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 1 08:42:03.762727 systemd-networkd[1349]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 1 08:42:03.763088 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 1 08:42:03.767941 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 1 08:42:03.768947 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jul 1 08:42:03.771771 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jul 1 08:42:03.772772 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52878398 eth0: Data path switched to VF: enP30832s1
Jul 1 08:42:03.773091 systemd-networkd[1349]: enP30832s1: Link UP
Jul 1 08:42:03.773155 systemd-networkd[1349]: eth0: Link UP
Jul 1 08:42:03.773157 systemd-networkd[1349]: eth0: Gained carrier
Jul 1 08:42:03.773170 systemd-networkd[1349]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 1 08:42:03.779931 systemd-networkd[1349]: enP30832s1: Gained carrier
Jul 1 08:42:03.800772 systemd-networkd[1349]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 1 08:42:03.921350 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 1 08:42:03.929765 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jul 1 08:42:04.140832 kernel: loop4: detected capacity change from 0 to 146336
Jul 1 08:42:04.161646 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jul 1 08:42:04.165015 kernel: loop5: detected capacity change from 0 to 224512
Jul 1 08:42:04.168092 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 1 08:42:04.312331 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 1 08:42:04.906787 kernel: loop6: detected capacity change from 0 to 28616
Jul 1 08:42:04.919768 kernel: loop7: detected capacity change from 0 to 114000
Jul 1 08:42:04.930978 (sd-merge)[1436]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jul 1 08:42:04.931333 (sd-merge)[1436]: Merged extensions into '/usr'.
Jul 1 08:42:05.000393 systemd[1]: Reload requested from client PID 1314 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 1 08:42:05.000409 systemd[1]: Reloading...
Jul 1 08:42:05.046875 zram_generator::config[1466]: No configuration found.
Jul 1 08:42:05.153597 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 1 08:42:05.222872 systemd-networkd[1349]: enP30832s1: Gained IPv6LL
Jul 1 08:42:05.381250 systemd[1]: Reloading finished in 380 ms.
Jul 1 08:42:05.400369 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 1 08:42:05.402200 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 1 08:42:05.411398 systemd[1]: Starting ensure-sysext.service...
Jul 1 08:42:05.415935 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 1 08:42:05.433866 systemd[1]: Reload requested from client PID 1532 ('systemctl') (unit ensure-sysext.service)...
Jul 1 08:42:05.433951 systemd[1]: Reloading...
Jul 1 08:42:05.437504 systemd-tmpfiles[1533]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 1 08:42:05.437532 systemd-tmpfiles[1533]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 1 08:42:05.437716 systemd-tmpfiles[1533]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 1 08:42:05.437924 systemd-tmpfiles[1533]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 1 08:42:05.438566 systemd-tmpfiles[1533]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 1 08:42:05.438792 systemd-tmpfiles[1533]: ACLs are not supported, ignoring.
Jul 1 08:42:05.438836 systemd-tmpfiles[1533]: ACLs are not supported, ignoring.
Jul 1 08:42:05.441557 systemd-tmpfiles[1533]: Detected autofs mount point /boot during canonicalization of boot.
Jul 1 08:42:05.441565 systemd-tmpfiles[1533]: Skipping /boot
Jul 1 08:42:05.445211 systemd-tmpfiles[1533]: Detected autofs mount point /boot during canonicalization of boot.
Jul 1 08:42:05.445221 systemd-tmpfiles[1533]: Skipping /boot
Jul 1 08:42:05.477773 zram_generator::config[1560]: No configuration found.
Jul 1 08:42:05.478889 systemd-networkd[1349]: eth0: Gained IPv6LL
Jul 1 08:42:05.555579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 1 08:42:05.634346 systemd[1]: Reloading finished in 200 ms.
Jul 1 08:42:05.658156 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 1 08:42:05.769852 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 1 08:42:05.770006 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 1 08:42:05.772652 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 1 08:42:05.783302 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 1 08:42:05.787506 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 1 08:42:05.788629 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 1 08:42:05.788862 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 1 08:42:05.789004 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 1 08:42:05.790104 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 1 08:42:05.792015 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 1 08:42:05.792230 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 1 08:42:05.795202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 1 08:42:05.800918 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 1 08:42:05.805289 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 1 08:42:05.805419 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 1 08:42:05.812487 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 1 08:42:05.813239 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 1 08:42:05.816545 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 1 08:42:05.818637 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 1 08:42:05.822396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 1 08:42:05.826961 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 1 08:42:05.831820 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 1 08:42:05.838824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 1 08:42:05.840641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 1 08:42:05.840678 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 1 08:42:05.841568 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 1 08:42:05.848550 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 1 08:42:05.851832 systemd[1]: Reached target time-set.target - System Time Set.
Jul 1 08:42:05.854721 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 1 08:42:05.857663 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 1 08:42:05.858454 systemd[1]: Finished ensure-sysext.service.
Jul 1 08:42:05.859503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 1 08:42:05.859686 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 1 08:42:05.864020 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 1 08:42:05.864234 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 1 08:42:05.868122 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 1 08:42:05.868275 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 1 08:42:05.870967 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 1 08:42:05.871086 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 1 08:42:05.876244 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 1 08:42:05.876434 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 1 08:42:05.880876 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 1 08:42:06.024292 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 1 08:42:06.060388 systemd-resolved[1644]: Positive Trust Anchors:
Jul 1 08:42:06.060399 systemd-resolved[1644]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 1 08:42:06.060430 systemd-resolved[1644]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 1 08:42:06.063365 systemd-resolved[1644]: Using system hostname 'ci-9999.9.9-s-875ad0e937'.
Jul 1 08:42:06.064326 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 1 08:42:06.066838 systemd[1]: Reached target network.target - Network.
Jul 1 08:42:06.068816 systemd[1]: Reached target network-online.target - Network is Online.
Jul 1 08:42:06.069836 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 1 08:42:06.411840 augenrules[1667]: No rules
Jul 1 08:42:06.412610 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 1 08:42:06.412845 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 1 08:42:06.812666 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 1 08:42:06.814151 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 1 08:42:10.680404 ldconfig[1306]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 1 08:42:10.707558 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 1 08:42:10.712848 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 1 08:42:10.733911 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 1 08:42:10.735271 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 1 08:42:10.736561 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 1 08:42:10.738861 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 1 08:42:10.740121 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 1 08:42:10.741395 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 1 08:42:10.743956 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 1 08:42:10.747831 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 1 08:42:10.748953 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 1 08:42:10.748986 systemd[1]: Reached target paths.target - Path Units.
Jul 1 08:42:10.750789 systemd[1]: Reached target timers.target - Timer Units.
Jul 1 08:42:10.752449 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 1 08:42:10.755638 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 1 08:42:10.758201 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 1 08:42:10.760950 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 1 08:42:10.762573 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 1 08:42:10.775132 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 1 08:42:10.779002 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 1 08:42:10.780557 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 1 08:42:10.782298 systemd[1]: Reached target sockets.target - Socket Units.
Jul 1 08:42:10.784817 systemd[1]: Reached target basic.target - Basic System.
Jul 1 08:42:10.786844 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 1 08:42:10.786866 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 1 08:42:10.788602 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 1 08:42:10.797427 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 1 08:42:10.802461 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 1 08:42:10.807692 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 1 08:42:10.813865 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 1 08:42:10.816880 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 1 08:42:10.820908 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 1 08:42:10.823846 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 1 08:42:10.824580 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 1 08:42:10.826178 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Jul 1 08:42:10.830686 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jul 1 08:42:10.833884 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jul 1 08:42:10.834811 jq[1684]: false
Jul 1 08:42:10.836551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 1 08:42:10.841951 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 1 08:42:10.847568 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 1 08:42:10.851567 KVP[1690]: KVP starting; pid is:1690
Jul 1 08:42:10.851823 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 1 08:42:10.854465 (chronyd)[1679]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 1 08:42:10.859695 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 1 08:42:10.861410 KVP[1690]: KVP LIC Version: 3.1
Jul 1 08:42:10.862102 kernel: hv_utils: KVP IC version 4.0
Jul 1 08:42:10.863145 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 1 08:42:10.868907 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 1 08:42:10.871039 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 1 08:42:10.871410 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 1 08:42:10.872718 chronyd[1702]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 1 08:42:10.874688 systemd[1]: Starting update-engine.service - Update Engine...
Jul 1 08:42:10.877923 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 1 08:42:10.882397 chronyd[1702]: Timezone right/UTC failed leap second check, ignoring
Jul 1 08:42:10.882703 chronyd[1702]: Loaded seccomp filter (level 2)
Jul 1 08:42:10.884166 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 1 08:42:10.884609 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 1 08:42:10.888132 systemd[1]: Started chronyd.service - NTP client/server.
Jul 1 08:42:10.890286 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 1 08:42:10.890462 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 1 08:42:10.897975 systemd[1]: motdgen.service: Deactivated successfully.
Jul 1 08:42:10.898807 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 1 08:42:10.915590 jq[1706]: true
Jul 1 08:42:10.924085 (ntainerd)[1721]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 1 08:42:10.938081 jq[1728]: true
Jul 1 08:42:10.953690 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 1 08:42:11.119208 extend-filesystems[1685]: Found /dev/nvme0n1p6
Jul 1 08:42:11.248277 extend-filesystems[1685]: Found /dev/nvme0n1p9
Jul 1 08:42:11.252732 google_oslogin_nss_cache[1686]: oslogin_cache_refresh[1686]: Refreshing passwd entry cache
Jul 1 08:42:11.252684 oslogin_cache_refresh[1686]: Refreshing passwd entry cache
Jul 1 08:42:11.263456 google_oslogin_nss_cache[1686]: oslogin_cache_refresh[1686]: Failure getting users, quitting
Jul 1 08:42:11.263456 google_oslogin_nss_cache[1686]: oslogin_cache_refresh[1686]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 1 08:42:11.263456 google_oslogin_nss_cache[1686]: oslogin_cache_refresh[1686]: Refreshing group entry cache
Jul 1 08:42:11.263046 oslogin_cache_refresh[1686]: Failure getting users, quitting
Jul 1 08:42:11.263063 oslogin_cache_refresh[1686]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 1 08:42:11.263101 oslogin_cache_refresh[1686]: Refreshing group entry cache
Jul 1 08:42:11.278957 google_oslogin_nss_cache[1686]: oslogin_cache_refresh[1686]: Failure getting groups, quitting
Jul 1 08:42:11.279011 oslogin_cache_refresh[1686]: Failure getting groups, quitting
Jul 1 08:42:11.279022 oslogin_cache_refresh[1686]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 1 08:42:11.279102 google_oslogin_nss_cache[1686]: oslogin_cache_refresh[1686]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 1 08:42:11.279899 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 1 08:42:11.280079 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 1 08:42:11.287412 extend-filesystems[1685]: Checking size of /dev/nvme0n1p9
Jul 1 08:42:11.324149 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 1 08:42:11.505624 dbus-daemon[1682]: [system] SELinux support is enabled
Jul 1 08:42:11.801127 tar[1714]: linux-amd64/LICENSE
Jul 1 08:42:11.365606 systemd-logind[1700]: New seat seat0.
Jul 1 08:42:11.801397 update_engine[1703]: I20250701 08:42:11.324226 1703 main.cc:92] Flatcar Update Engine starting
Jul 1 08:42:11.801397 update_engine[1703]: I20250701 08:42:11.511986 1703 update_check_scheduler.cc:74] Next update check in 10m56s
Jul 1 08:42:11.505773 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 1 08:42:11.801709 tar[1714]: linux-amd64/helm
Jul 1 08:42:11.510169 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 1 08:42:11.510194 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 1 08:42:11.512786 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 1 08:42:11.512805 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 1 08:42:11.515980 systemd[1]: Started update-engine.service - Update Engine.
Jul 1 08:42:11.519691 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 1 08:42:11.802103 systemd-logind[1700]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 1 08:42:11.802249 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 1 08:42:11.825200 extend-filesystems[1685]: Old size kept for /dev/nvme0n1p9
Jul 1 08:42:11.832628 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 1 08:42:11.832850 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 1 08:42:11.947827 coreos-metadata[1681]: Jul 01 08:42:11.947 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 1 08:42:12.101069 coreos-metadata[1681]: Jul 01 08:42:11.950 INFO Fetch successful
Jul 1 08:42:12.101069 coreos-metadata[1681]: Jul 01 08:42:11.950 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 1 08:42:12.101069 coreos-metadata[1681]: Jul 01 08:42:11.953 INFO Fetch successful
Jul 1 08:42:12.101069 coreos-metadata[1681]: Jul 01 08:42:11.954 INFO Fetching http://168.63.129.16/machine/44ca9714-f2ac-4f81-9f03-95eca01984c5/8f9ddc90%2D3d4f%2D4be0%2D87d4%2De45093b120cb.%5Fci%2D9999.9.9%2Ds%2D875ad0e937?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 1 08:42:12.101069 coreos-metadata[1681]: Jul 01 08:42:11.955 INFO Fetch successful
Jul 1 08:42:12.101069 coreos-metadata[1681]: Jul 01 08:42:11.955 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 1 08:42:12.101069 coreos-metadata[1681]: Jul 01 08:42:11.964 INFO Fetch successful
Jul 1 08:42:11.985931 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 1 08:42:11.987331 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 1 08:42:12.103662 sshd_keygen[1729]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 1 08:42:12.137644 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 1 08:42:12.142958 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 1 08:42:12.146969 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul 1 08:42:12.181189 systemd[1]: issuegen.service: Deactivated successfully.
Jul 1 08:42:12.181345 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 1 08:42:12.188078 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 1 08:42:12.198380 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jul 1 08:42:12.376082 tar[1714]: linux-amd64/README.md
Jul 1 08:42:12.385808 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 1 08:42:12.418352 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 1 08:42:12.420296 locksmithd[1769]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 1 08:42:12.422495 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 1 08:42:12.430973 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 1 08:42:12.432945 systemd[1]: Reached target getty.target - Login Prompts.
Jul 1 08:42:12.435698 bash[1746]: Updated "/home/core/.ssh/authorized_keys"
Jul 1 08:42:12.437083 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 1 08:42:12.439624 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 1 08:42:12.608851 containerd[1721]: time="2025-07-01T08:42:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 1 08:42:12.609767 containerd[1721]: time="2025-07-01T08:42:12.609390756Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 1 08:42:12.619325 containerd[1721]: time="2025-07-01T08:42:12.619286574Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.988µs"
Jul 1 08:42:12.619325 containerd[1721]: time="2025-07-01T08:42:12.619323958Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 1 08:42:12.619410 containerd[1721]: time="2025-07-01T08:42:12.619347974Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 1 08:42:12.619478 containerd[1721]: time="2025-07-01T08:42:12.619464753Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 1 08:42:12.619497 containerd[1721]: time="2025-07-01T08:42:12.619483987Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 1 08:42:12.619520 containerd[1721]: time="2025-07-01T08:42:12.619510866Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 1 08:42:12.619572 containerd[1721]: time="2025-07-01T08:42:12.619561334Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 1 08:42:12.619592 containerd[1721]: time="2025-07-01T08:42:12.619576966Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 1 08:42:12.619866 containerd[1721]: time="2025-07-01T08:42:12.619843425Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 1 08:42:12.619901 containerd[1721]: time="2025-07-01T08:42:12.619866358Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 1 08:42:12.619901 containerd[1721]: time="2025-07-01T08:42:12.619883087Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 1 08:42:12.619901 containerd[1721]: time="2025-07-01T08:42:12.619894702Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 1 08:42:12.620773 containerd[1721]: time="2025-07-01T08:42:12.619969562Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 1 08:42:12.620773 containerd[1721]: time="2025-07-01T08:42:12.620124527Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 1 08:42:12.620773 containerd[1721]: time="2025-07-01T08:42:12.620152507Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 1 08:42:12.620773 containerd[1721]: time="2025-07-01T08:42:12.620162706Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 1 08:42:12.620773 containerd[1721]: time="2025-07-01T08:42:12.620189870Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 1 08:42:12.620773 containerd[1721]: time="2025-07-01T08:42:12.620574228Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 1 08:42:12.620773 containerd[1721]: time="2025-07-01T08:42:12.620633987Z" level=info msg="metadata content store policy set" policy=shared
Jul 1 08:42:12.634004 containerd[1721]: time="2025-07-01T08:42:12.633944072Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 1 08:42:12.634106 containerd[1721]: time="2025-07-01T08:42:12.634093896Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 1 08:42:12.634140 containerd[1721]: time="2025-07-01T08:42:12.634134187Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 1 08:42:12.634168 containerd[1721]: time="2025-07-01T08:42:12.634161472Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 1 08:42:12.634203 containerd[1721]: time="2025-07-01T08:42:12.634197610Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 1 08:42:12.634225 containerd[1721]: time="2025-07-01T08:42:12.634221425Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 1 08:42:12.634262 containerd[1721]: time="2025-07-01T08:42:12.634256671Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 1 08:42:12.634291 containerd[1721]: time="2025-07-01T08:42:12.634286257Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 1 08:42:12.634317 containerd[1721]: time="2025-07-01T08:42:12.634312556Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 1 08:42:12.634349 containerd[1721]: time="2025-07-01T08:42:12.634344242Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 1 08:42:12.634373 containerd[1721]: time="2025-07-01T08:42:12.634368885Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 1 08:42:12.634397 containerd[1721]: time="2025-07-01T08:42:12.634393960Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 1 08:42:12.634500 containerd[1721]: time="2025-07-01T08:42:12.634492671Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 1 08:42:12.634539 containerd[1721]: time="2025-07-01T08:42:12.634532858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 1 08:42:12.634579 containerd[1721]: time="2025-07-01T08:42:12.634573364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 1 08:42:12.634611 containerd[1721]: time="2025-07-01T08:42:12.634604975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 1 08:42:12.634642 containerd[1721]: time="2025-07-01T08:42:12.634635577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 1 08:42:12.634672 containerd[1721]: time="2025-07-01T08:42:12.634666926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 1 08:42:12.634712 containerd[1721]: time="2025-07-01T08:42:12.634704683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 1 08:42:12.634759 containerd[1721]: time="2025-07-01T08:42:12.634742633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 1 08:42:12.634795 containerd[1721]: time="2025-07-01T08:42:12.634789075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 1 08:42:12.634830 containerd[1721]: time="2025-07-01T08:42:12.634823223Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 1 08:42:12.634865 containerd[1721]: time="2025-07-01T08:42:12.634858818Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 1 08:42:12.634944 containerd[1721]: time="2025-07-01T08:42:12.634936651Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 1 08:42:12.634977 containerd[1721]: time="2025-07-01T08:42:12.634972129Z" level=info msg="Start snapshots syncer"
Jul 1 08:42:12.635026 containerd[1721]: time="2025-07-01T08:42:12.635019104Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 1 08:42:12.635294 containerd[1721]: time="2025-07-01T08:42:12.635268331Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":fals
e,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 1 08:42:12.635435 containerd[1721]: time="2025-07-01T08:42:12.635425562Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 1 08:42:12.635520 containerd[1721]: time="2025-07-01T08:42:12.635512202Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 1 08:42:12.635621 containerd[1721]: time="2025-07-01T08:42:12.635613504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 1 08:42:12.635661 containerd[1721]: time="2025-07-01T08:42:12.635653843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 1 08:42:12.635769 containerd[1721]: time="2025-07-01T08:42:12.635686142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 1 08:42:12.635769 containerd[1721]: time="2025-07-01T08:42:12.635696950Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 1 08:42:12.635769 containerd[1721]: time="2025-07-01T08:42:12.635708340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 1 08:42:12.635769 containerd[1721]: 
time="2025-07-01T08:42:12.635721716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 1 08:42:12.635769 containerd[1721]: time="2025-07-01T08:42:12.635732586Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 1 08:42:12.635875 containerd[1721]: time="2025-07-01T08:42:12.635868527Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 1 08:42:12.635906 containerd[1721]: time="2025-07-01T08:42:12.635899931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 1 08:42:12.635933 containerd[1721]: time="2025-07-01T08:42:12.635928565Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 1 08:42:12.635974 containerd[1721]: time="2025-07-01T08:42:12.635967218Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 1 08:42:12.636004 containerd[1721]: time="2025-07-01T08:42:12.635998234Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 1 08:42:12.636028 containerd[1721]: time="2025-07-01T08:42:12.636021879Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 1 08:42:12.636792 containerd[1721]: time="2025-07-01T08:42:12.636050845Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 1 08:42:12.636792 containerd[1721]: time="2025-07-01T08:42:12.636058342Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 1 08:42:12.636792 containerd[1721]: time="2025-07-01T08:42:12.636067157Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 1 08:42:12.636792 containerd[1721]: time="2025-07-01T08:42:12.636076367Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 1 08:42:12.636792 containerd[1721]: time="2025-07-01T08:42:12.636089971Z" level=info msg="runtime interface created" Jul 1 08:42:12.636792 containerd[1721]: time="2025-07-01T08:42:12.636094291Z" level=info msg="created NRI interface" Jul 1 08:42:12.636792 containerd[1721]: time="2025-07-01T08:42:12.636102363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 1 08:42:12.636792 containerd[1721]: time="2025-07-01T08:42:12.636118880Z" level=info msg="Connect containerd service" Jul 1 08:42:12.636792 containerd[1721]: time="2025-07-01T08:42:12.636145142Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 1 08:42:12.636792 containerd[1721]: time="2025-07-01T08:42:12.636707489Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 1 08:42:12.675711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 1 08:42:12.685405 (kubelet)[1836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 1 08:42:12.864775 containerd[1721]: time="2025-07-01T08:42:12.864721454Z" level=info msg="Start subscribing containerd event"
Jul 1 08:42:12.865114 containerd[1721]: time="2025-07-01T08:42:12.865085571Z" level=info msg="Start recovering state"
Jul 1 08:42:12.865231 containerd[1721]: time="2025-07-01T08:42:12.865221370Z" level=info msg="Start event monitor"
Jul 1 08:42:12.865277 containerd[1721]: time="2025-07-01T08:42:12.865270026Z" level=info msg="Start cni network conf syncer for default"
Jul 1 08:42:12.865316 containerd[1721]: time="2025-07-01T08:42:12.865309734Z" level=info msg="Start streaming server"
Jul 1 08:42:12.865422 containerd[1721]: time="2025-07-01T08:42:12.865348032Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 1 08:42:12.865422 containerd[1721]: time="2025-07-01T08:42:12.865355081Z" level=info msg="runtime interface starting up..."
Jul 1 08:42:12.865422 containerd[1721]: time="2025-07-01T08:42:12.865360496Z" level=info msg="starting plugins..."
Jul 1 08:42:12.865422 containerd[1721]: time="2025-07-01T08:42:12.865371507Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 1 08:42:12.865510 containerd[1721]: time="2025-07-01T08:42:12.865053603Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 1 08:42:12.865574 containerd[1721]: time="2025-07-01T08:42:12.865563381Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 1 08:42:12.866644 containerd[1721]: time="2025-07-01T08:42:12.866091276Z" level=info msg="containerd successfully booted in 0.258576s"
Jul 1 08:42:12.866185 systemd[1]: Started containerd.service - containerd container runtime.
Jul 1 08:42:12.868091 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 1 08:42:12.869576 systemd[1]: Startup finished in 2.917s (kernel) + 32.820s (initrd) + 18.109s (userspace) = 53.847s.
Jul 1 08:42:12.993559 login[1820]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 1 08:42:12.995347 login[1819]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 1 08:42:13.002971 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 1 08:42:13.005356 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 1 08:42:13.016688 systemd-logind[1700]: New session 2 of user core.
Jul 1 08:42:13.022305 systemd-logind[1700]: New session 1 of user core.
Jul 1 08:42:13.029146 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 1 08:42:13.031579 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 1 08:42:13.044100 (systemd)[1860]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 1 08:42:13.048029 systemd-logind[1700]: New session c1 of user core.
Jul 1 08:42:13.124766 waagent[1810]: 2025-07-01T08:42:13.123200Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Jul 1 08:42:13.124766 waagent[1810]: 2025-07-01T08:42:13.123924Z INFO Daemon Daemon OS: flatcar 9999.9.9
Jul 1 08:42:13.128296 waagent[1810]: 2025-07-01T08:42:13.126854Z INFO Daemon Daemon Python: 3.11.12
Jul 1 08:42:13.129771 waagent[1810]: 2025-07-01T08:42:13.128726Z INFO Daemon Daemon Run daemon
Jul 1 08:42:13.130372 waagent[1810]: 2025-07-01T08:42:13.130335Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='9999.9.9'
Jul 1 08:42:13.133765 waagent[1810]: 2025-07-01T08:42:13.132859Z INFO Daemon Daemon Using waagent for provisioning
Jul 1 08:42:13.135180 waagent[1810]: 2025-07-01T08:42:13.135145Z INFO Daemon Daemon Activate resource disk
Jul 1 08:42:13.137813 waagent[1810]: 2025-07-01T08:42:13.136612Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jul 1 08:42:13.140866 waagent[1810]: 2025-07-01T08:42:13.140831Z INFO Daemon Daemon Found device: None
Jul 1 08:42:13.142346 waagent[1810]: 2025-07-01T08:42:13.142297Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jul 1 08:42:13.144663 waagent[1810]: 2025-07-01T08:42:13.144564Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jul 1 08:42:13.147937 waagent[1810]: 2025-07-01T08:42:13.147807Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 1 08:42:13.149774 waagent[1810]: 2025-07-01T08:42:13.149689Z INFO Daemon Daemon Running default provisioning handler
Jul 1 08:42:13.159842 waagent[1810]: 2025-07-01T08:42:13.159795Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jul 1 08:42:13.164177 waagent[1810]: 2025-07-01T08:42:13.164145Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jul 1 08:42:13.166910 waagent[1810]: 2025-07-01T08:42:13.164419Z INFO Daemon Daemon cloud-init is enabled: False
Jul 1 08:42:13.168365 waagent[1810]: 2025-07-01T08:42:13.168335Z INFO Daemon Daemon Copying ovf-env.xml
Jul 1 08:42:13.213312 waagent[1810]: 2025-07-01T08:42:13.212860Z INFO Daemon Daemon Successfully mounted dvd
Jul 1 08:42:13.228570 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jul 1 08:42:13.234002 waagent[1810]: 2025-07-01T08:42:13.233966Z INFO Daemon Daemon Detect protocol endpoint
Jul 1 08:42:13.234594 waagent[1810]: 2025-07-01T08:42:13.234565Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 1 08:42:13.235113 waagent[1810]: 2025-07-01T08:42:13.235092Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jul 1 08:42:13.235321 waagent[1810]: 2025-07-01T08:42:13.235308Z INFO Daemon Daemon Test for route to 168.63.129.16
Jul 1 08:42:13.235669 waagent[1810]: 2025-07-01T08:42:13.235648Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jul 1 08:42:13.236815 waagent[1810]: 2025-07-01T08:42:13.236261Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jul 1 08:42:13.250249 systemd[1860]: Queued start job for default target default.target.
Jul 1 08:42:13.255520 systemd[1860]: Created slice app.slice - User Application Slice.
Jul 1 08:42:13.255614 systemd[1860]: Reached target paths.target - Paths.
Jul 1 08:42:13.255644 systemd[1860]: Reached target timers.target - Timers.
Jul 1 08:42:13.257292 waagent[1810]: 2025-07-01T08:42:13.257259Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jul 1 08:42:13.257406 systemd[1860]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 1 08:42:13.259803 waagent[1810]: 2025-07-01T08:42:13.259775Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jul 1 08:42:13.262786 waagent[1810]: 2025-07-01T08:42:13.262324Z INFO Daemon Daemon Server preferred version:2015-04-05
Jul 1 08:42:13.267400 systemd[1860]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 1 08:42:13.267478 systemd[1860]: Reached target sockets.target - Sockets.
Jul 1 08:42:13.267510 systemd[1860]: Reached target basic.target - Basic System.
Jul 1 08:42:13.267567 systemd[1860]: Reached target default.target - Main User Target.
Jul 1 08:42:13.267585 systemd[1860]: Startup finished in 211ms.
Jul 1 08:42:13.267656 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 1 08:42:13.273879 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 1 08:42:13.274848 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 1 08:42:13.382039 waagent[1810]: 2025-07-01T08:42:13.382001Z INFO Daemon Daemon Initializing goal state during protocol detection
Jul 1 08:42:13.382393 waagent[1810]: 2025-07-01T08:42:13.382364Z INFO Daemon Daemon Forcing an update of the goal state.
Jul 1 08:42:13.386255 waagent[1810]: 2025-07-01T08:42:13.386034Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jul 1 08:42:13.415361 kubelet[1836]: E0701 08:42:13.415325 1836 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 1 08:42:13.416835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 1 08:42:13.416945 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 1 08:42:13.417201 systemd[1]: kubelet.service: Consumed 850ms CPU time, 263.8M memory peak.
Jul 1 08:42:13.423963 waagent[1810]: 2025-07-01T08:42:13.423937Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Jul 1 08:42:13.425216 waagent[1810]: 2025-07-01T08:42:13.425186Z INFO Daemon
Jul 1 08:42:13.425872 waagent[1810]: 2025-07-01T08:42:13.425811Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 660d20ad-8c80-4c6b-8ef8-8edd8c38abe3 eTag: 4639330983057455972 source: Fabric]
Jul 1 08:42:13.427912 waagent[1810]: 2025-07-01T08:42:13.427887Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jul 1 08:42:13.429195 waagent[1810]: 2025-07-01T08:42:13.429170Z INFO Daemon
Jul 1 08:42:13.429830 waagent[1810]: 2025-07-01T08:42:13.429403Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jul 1 08:42:13.437369 waagent[1810]: 2025-07-01T08:42:13.437345Z INFO Daemon Daemon Downloading artifacts profile blob
Jul 1 08:42:13.496565 waagent[1810]: 2025-07-01T08:42:13.496517Z INFO Daemon Downloaded certificate {'thumbprint': 'B1EB14F48DB9D7B9403F99A96326555C348900E4', 'hasPrivateKey': True}
Jul 1 08:42:13.498843 waagent[1810]: 2025-07-01T08:42:13.498808Z INFO Daemon Fetch goal state completed
Jul 1 08:42:13.509173 waagent[1810]: 2025-07-01T08:42:13.509108Z INFO Daemon Daemon Starting provisioning
Jul 1 08:42:13.509543 waagent[1810]: 2025-07-01T08:42:13.509516Z INFO Daemon Daemon Handle ovf-env.xml.
Jul 1 08:42:13.510555 waagent[1810]: 2025-07-01T08:42:13.510249Z INFO Daemon Daemon Set hostname [ci-9999.9.9-s-875ad0e937]
Jul 1 08:42:13.518454 waagent[1810]: 2025-07-01T08:42:13.518417Z INFO Daemon Daemon Publish hostname [ci-9999.9.9-s-875ad0e937]
Jul 1 08:42:13.519242 waagent[1810]: 2025-07-01T08:42:13.519105Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jul 1 08:42:13.519412 waagent[1810]: 2025-07-01T08:42:13.519389Z INFO Daemon Daemon Primary interface is [eth0]
Jul 1 08:42:13.526659 systemd-networkd[1349]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 1 08:42:13.526666 systemd-networkd[1349]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 1 08:42:13.526686 systemd-networkd[1349]: eth0: DHCP lease lost
Jul 1 08:42:13.527437 waagent[1810]: 2025-07-01T08:42:13.527396Z INFO Daemon Daemon Create user account if not exists
Jul 1 08:42:13.528312 waagent[1810]: 2025-07-01T08:42:13.527973Z INFO Daemon Daemon User core already exists, skip useradd
Jul 1 08:42:13.528312 waagent[1810]: 2025-07-01T08:42:13.528119Z INFO Daemon Daemon Configure sudoer
Jul 1 08:42:13.531701 waagent[1810]: 2025-07-01T08:42:13.531653Z INFO Daemon Daemon Configure sshd
Jul 1 08:42:13.535101 waagent[1810]: 2025-07-01T08:42:13.535065Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jul 1 08:42:13.536157 waagent[1810]: 2025-07-01T08:42:13.535406Z INFO Daemon Daemon Deploy ssh public key.
Jul 1 08:42:13.543811 systemd-networkd[1349]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 1 08:42:14.602192 waagent[1810]: 2025-07-01T08:42:14.602121Z INFO Daemon Daemon Provisioning complete
Jul 1 08:42:14.612052 waagent[1810]: 2025-07-01T08:42:14.612022Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jul 1 08:42:14.612700 waagent[1810]: 2025-07-01T08:42:14.612378Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jul 1 08:42:14.612700 waagent[1810]: 2025-07-01T08:42:14.612548Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Jul 1 08:42:14.702912 waagent[1913]: 2025-07-01T08:42:14.702858Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Jul 1 08:42:14.703125 waagent[1913]: 2025-07-01T08:42:14.702944Z INFO ExtHandler ExtHandler OS: flatcar 9999.9.9
Jul 1 08:42:14.703125 waagent[1913]: 2025-07-01T08:42:14.702981Z INFO ExtHandler ExtHandler Python: 3.11.12
Jul 1 08:42:14.703125 waagent[1913]: 2025-07-01T08:42:14.703015Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Jul 1 08:42:14.719275 waagent[1913]: 2025-07-01T08:42:14.719233Z INFO ExtHandler ExtHandler Distro: flatcar-9999.9.9; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Jul 1 08:42:14.719387 waagent[1913]: 2025-07-01T08:42:14.719364Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 1 08:42:14.719430 waagent[1913]: 2025-07-01T08:42:14.719410Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 1 08:42:14.727303 waagent[1913]: 2025-07-01T08:42:14.727256Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jul 1 08:42:14.732501 waagent[1913]: 2025-07-01T08:42:14.732468Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Jul 1 08:42:14.732824 waagent[1913]: 2025-07-01T08:42:14.732794Z INFO ExtHandler
Jul 1 08:42:14.732877 waagent[1913]: 2025-07-01T08:42:14.732845Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8c2cd269-697e-43e7-a38c-d8f2d7c18308 eTag: 4639330983057455972 source: Fabric]
Jul 1 08:42:14.733061 waagent[1913]: 2025-07-01T08:42:14.733040Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jul 1 08:42:14.733369 waagent[1913]: 2025-07-01T08:42:14.733346Z INFO ExtHandler
Jul 1 08:42:14.733401 waagent[1913]: 2025-07-01T08:42:14.733385Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jul 1 08:42:14.738899 waagent[1913]: 2025-07-01T08:42:14.738874Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jul 1 08:42:14.804304 waagent[1913]: 2025-07-01T08:42:14.804261Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B1EB14F48DB9D7B9403F99A96326555C348900E4', 'hasPrivateKey': True}
Jul 1 08:42:14.804595 waagent[1913]: 2025-07-01T08:42:14.804570Z INFO ExtHandler Fetch goal state completed
Jul 1 08:42:14.816269 waagent[1913]: 2025-07-01T08:42:14.816229Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025)
Jul 1 08:42:14.819604 waagent[1913]: 2025-07-01T08:42:14.819555Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1913
Jul 1 08:42:14.819684 waagent[1913]: 2025-07-01T08:42:14.819658Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jul 1 08:42:14.819931 waagent[1913]: 2025-07-01T08:42:14.819910Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Jul 1 08:42:14.820745 waagent[1913]: 2025-07-01T08:42:14.820715Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '9999.9.9', '', 'Flatcar Container Linux by Kinvolk']
Jul 1 08:42:14.821020 waagent[1913]: 2025-07-01T08:42:14.820996Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '9999.9.9', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Jul 1 08:42:14.821115 waagent[1913]: 2025-07-01T08:42:14.821097Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Jul 1 08:42:14.821443 waagent[1913]: 2025-07-01T08:42:14.821422Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jul 1 08:42:14.832508 waagent[1913]: 2025-07-01T08:42:14.832487Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jul 1 08:42:14.832613 waagent[1913]: 2025-07-01T08:42:14.832594Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jul 1 08:42:14.837550 waagent[1913]: 2025-07-01T08:42:14.837415Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jul 1 08:42:14.841735 systemd[1]: Reload requested from client PID 1928 ('systemctl') (unit waagent.service)...
Jul 1 08:42:14.841744 systemd[1]: Reloading...
Jul 1 08:42:14.904770 zram_generator::config[1966]: No configuration found.
Jul 1 08:42:14.975886 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 1 08:42:15.060921 systemd[1]: Reloading finished in 218 ms.
Jul 1 08:42:15.072533 waagent[1913]: 2025-07-01T08:42:15.071912Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jul 1 08:42:15.072533 waagent[1913]: 2025-07-01T08:42:15.072025Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jul 1 08:42:15.274170 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#188 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001
Jul 1 08:42:15.455964 waagent[1913]: 2025-07-01T08:42:15.455900Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jul 1 08:42:15.456240 waagent[1913]: 2025-07-01T08:42:15.456210Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Jul 1 08:42:15.456849 waagent[1913]: 2025-07-01T08:42:15.456788Z INFO ExtHandler ExtHandler Starting env monitor service.
Jul 1 08:42:15.456906 waagent[1913]: 2025-07-01T08:42:15.456885Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 1 08:42:15.456986 waagent[1913]: 2025-07-01T08:42:15.456970Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 1 08:42:15.457162 waagent[1913]: 2025-07-01T08:42:15.457144Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jul 1 08:42:15.457615 waagent[1913]: 2025-07-01T08:42:15.457595Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 1 08:42:15.457720 waagent[1913]: 2025-07-01T08:42:15.457677Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jul 1 08:42:15.457798 waagent[1913]: 2025-07-01T08:42:15.457762Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 1 08:42:15.457907 waagent[1913]: 2025-07-01T08:42:15.457889Z INFO EnvHandler ExtHandler Configure routes
Jul 1 08:42:15.458032 waagent[1913]: 2025-07-01T08:42:15.458018Z INFO EnvHandler ExtHandler Gateway:None
Jul 1 08:42:15.458163 waagent[1913]: 2025-07-01T08:42:15.458149Z INFO EnvHandler ExtHandler Routes:None
Jul 1 08:42:15.458717 waagent[1913]: 2025-07-01T08:42:15.458679Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jul 1 08:42:15.458717 waagent[1913]: 2025-07-01T08:42:15.458729Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jul 1 08:42:15.459326 waagent[1913]: 2025-07-01T08:42:15.459302Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jul 1 08:42:15.459393 waagent[1913]: 2025-07-01T08:42:15.459332Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jul 1 08:42:15.459393 waagent[1913]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jul 1 08:42:15.459393 waagent[1913]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Jul 1 08:42:15.459393 waagent[1913]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jul 1 08:42:15.459393 waagent[1913]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jul 1 08:42:15.459393 waagent[1913]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 1 08:42:15.459393 waagent[1913]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 1 08:42:15.459696 waagent[1913]: 2025-07-01T08:42:15.459647Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jul 1 08:42:15.459872 waagent[1913]: 2025-07-01T08:42:15.459835Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jul 1 08:42:15.473083 waagent[1913]: 2025-07-01T08:42:15.472731Z INFO MonitorHandler ExtHandler Network interfaces:
Jul 1 08:42:15.473083 waagent[1913]: Executing ['ip', '-a', '-o', 'link']:
Jul 1 08:42:15.473083 waagent[1913]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul 1 08:42:15.473083 waagent[1913]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:87:83:98 brd ff:ff:ff:ff:ff:ff\ alias Network Device
Jul 1 08:42:15.473083 waagent[1913]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:87:83:98 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Jul 1 08:42:15.473083 waagent[1913]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul 1 08:42:15.473083 waagent[1913]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jul 1 08:42:15.473083 waagent[1913]: 2: eth0 inet 10.200.8.13/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Jul 1 08:42:15.473083 waagent[1913]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul 1 08:42:15.473083 waagent[1913]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jul 1 08:42:15.473083 waagent[1913]: 2: eth0 inet6 fe80::7e1e:52ff:fe87:8398/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 1 08:42:15.473083 waagent[1913]: 3: enP30832s1 inet6 fe80::7e1e:52ff:fe87:8398/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 1 08:42:15.473984 waagent[1913]: 2025-07-01T08:42:15.473955Z INFO ExtHandler ExtHandler
Jul 1 08:42:15.474054 waagent[1913]: 2025-07-01T08:42:15.474018Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 507ed921-ccc2-4fdc-859a-7f6539b43702 correlation 3d0c94c3-eac3-48cb-8bf8-ba1a0810179f created: 2025-07-01T08:40:53.320976Z]
Jul 1 08:42:15.474313 waagent[1913]: 2025-07-01T08:42:15.474293Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jul 1 08:42:15.474894 waagent[1913]: 2025-07-01T08:42:15.474856Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Jul 1 08:42:15.505916 waagent[1913]: 2025-07-01T08:42:15.505870Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Jul 1 08:42:15.505916 waagent[1913]: Try `iptables -h' or 'iptables --help' for more information.)
Jul 1 08:42:15.506262 waagent[1913]: 2025-07-01T08:42:15.506237Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jul 1 08:42:15.506262 waagent[1913]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 1 08:42:15.506262 waagent[1913]: pkts bytes target prot opt in out source destination
Jul 1 08:42:15.506262 waagent[1913]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 1 08:42:15.506262 waagent[1913]: pkts bytes target prot opt in out source destination
Jul 1 08:42:15.506262 waagent[1913]: Chain OUTPUT (policy ACCEPT 4 packets, 348 bytes)
Jul 1 08:42:15.506262 waagent[1913]: pkts bytes target prot opt in out source destination
Jul 1 08:42:15.506262 waagent[1913]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 1 08:42:15.506262 waagent[1913]: 2 104 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 1 08:42:15.506262 waagent[1913]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 1 08:42:15.506499 waagent[1913]: 2025-07-01T08:42:15.506355Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 2F0B91DB-08D6-4129-B702-18347E2D4114;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jul 1 08:42:15.508920 waagent[1913]: 2025-07-01T08:42:15.508873Z INFO EnvHandler ExtHandler Current Firewall rules:
Jul 1 08:42:15.508920 waagent[1913]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 1 08:42:15.508920 waagent[1913]: pkts bytes target prot opt in out source destination
Jul 1 08:42:15.508920 waagent[1913]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 1 08:42:15.508920 waagent[1913]: pkts bytes target prot opt in out source destination
Jul 1 08:42:15.508920 waagent[1913]: Chain OUTPUT (policy ACCEPT 4 packets, 348 bytes)
Jul 1 08:42:15.508920 waagent[1913]: pkts bytes target prot opt in out source destination
Jul 1 08:42:15.508920 waagent[1913]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 1 08:42:15.508920 waagent[1913]: 5 639 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 1 08:42:15.508920 waagent[1913]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 1 08:42:23.524557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 1 08:42:23.526296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 1 08:42:23.974669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 1 08:42:23.983011 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 1 08:42:24.015182 kubelet[2064]: E0701 08:42:24.015142 2064 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 1 08:42:24.017818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 1 08:42:24.017933 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 1 08:42:24.018222 systemd[1]: kubelet.service: Consumed 121ms CPU time, 108.8M memory peak.
Jul 1 08:42:34.024620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 1 08:42:34.026328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 1 08:42:34.488560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 1 08:42:34.491175 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 1 08:42:34.521438 kubelet[2080]: E0701 08:42:34.521407 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 1 08:42:34.522931 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 1 08:42:34.523058 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 1 08:42:34.523370 systemd[1]: kubelet.service: Consumed 113ms CPU time, 110.1M memory peak.
Jul 1 08:42:34.668224 chronyd[1702]: Selected source PHC0
Jul 1 08:42:43.951025 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 1 08:42:43.952076 systemd[1]: Started sshd@0-10.200.8.13:22-10.200.16.10:55490.service - OpenSSH per-connection server daemon (10.200.16.10:55490).
Jul 1 08:42:44.524377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 1 08:42:44.525987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 1 08:42:44.602646 sshd[2089]: Accepted publickey for core from 10.200.16.10 port 55490 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:42:44.603593 sshd-session[2089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:42:44.607475 systemd-logind[1700]: New session 3 of user core.
Jul 1 08:42:44.615871 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 1 08:42:45.084577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 1 08:42:45.093970 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 1 08:42:45.124560 kubelet[2102]: E0701 08:42:45.124530 2102 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 1 08:42:45.125803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 1 08:42:45.125929 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 1 08:42:45.126262 systemd[1]: kubelet.service: Consumed 113ms CPU time, 108.5M memory peak.
Jul 1 08:42:45.149851 systemd[1]: Started sshd@1-10.200.8.13:22-10.200.16.10:55496.service - OpenSSH per-connection server daemon (10.200.16.10:55496).
Jul 1 08:42:45.779219 sshd[2110]: Accepted publickey for core from 10.200.16.10 port 55496 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:42:45.780497 sshd-session[2110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:42:45.784923 systemd-logind[1700]: New session 4 of user core.
Jul 1 08:42:45.790925 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 1 08:42:46.223322 sshd[2113]: Connection closed by 10.200.16.10 port 55496
Jul 1 08:42:46.224150 sshd-session[2110]: pam_unix(sshd:session): session closed for user core
Jul 1 08:42:46.227368 systemd[1]: sshd@1-10.200.8.13:22-10.200.16.10:55496.service: Deactivated successfully.
Jul 1 08:42:46.228858 systemd[1]: session-4.scope: Deactivated successfully.
Jul 1 08:42:46.229480 systemd-logind[1700]: Session 4 logged out. Waiting for processes to exit.
Jul 1 08:42:46.230495 systemd-logind[1700]: Removed session 4.
Jul 1 08:42:46.337640 systemd[1]: Started sshd@2-10.200.8.13:22-10.200.16.10:55512.service - OpenSSH per-connection server daemon (10.200.16.10:55512).
Jul 1 08:42:46.964802 sshd[2119]: Accepted publickey for core from 10.200.16.10 port 55512 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:42:46.966028 sshd-session[2119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:42:46.970460 systemd-logind[1700]: New session 5 of user core.
Jul 1 08:42:46.976907 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 1 08:42:47.405062 sshd[2122]: Connection closed by 10.200.16.10 port 55512
Jul 1 08:42:47.405674 sshd-session[2119]: pam_unix(sshd:session): session closed for user core
Jul 1 08:42:47.408684 systemd[1]: sshd@2-10.200.8.13:22-10.200.16.10:55512.service: Deactivated successfully.
Jul 1 08:42:47.410152 systemd[1]: session-5.scope: Deactivated successfully.
Jul 1 08:42:47.411744 systemd-logind[1700]: Session 5 logged out. Waiting for processes to exit.
Jul 1 08:42:47.412522 systemd-logind[1700]: Removed session 5.
Jul 1 08:42:47.515676 systemd[1]: Started sshd@3-10.200.8.13:22-10.200.16.10:55528.service - OpenSSH per-connection server daemon (10.200.16.10:55528).
Jul 1 08:42:48.145973 sshd[2128]: Accepted publickey for core from 10.200.16.10 port 55528 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:42:48.147154 sshd-session[2128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:42:48.151649 systemd-logind[1700]: New session 6 of user core.
Jul 1 08:42:48.156882 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 1 08:42:48.589770 sshd[2131]: Connection closed by 10.200.16.10 port 55528
Jul 1 08:42:48.590341 sshd-session[2128]: pam_unix(sshd:session): session closed for user core
Jul 1 08:42:48.593814 systemd[1]: sshd@3-10.200.8.13:22-10.200.16.10:55528.service: Deactivated successfully.
Jul 1 08:42:48.595448 systemd[1]: session-6.scope: Deactivated successfully.
Jul 1 08:42:48.596057 systemd-logind[1700]: Session 6 logged out. Waiting for processes to exit.
Jul 1 08:42:48.597027 systemd-logind[1700]: Removed session 6.
Jul 1 08:42:48.704073 systemd[1]: Started sshd@4-10.200.8.13:22-10.200.16.10:55542.service - OpenSSH per-connection server daemon (10.200.16.10:55542).
Jul 1 08:42:49.334075 sshd[2137]: Accepted publickey for core from 10.200.16.10 port 55542 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:42:49.335363 sshd-session[2137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:42:49.339641 systemd-logind[1700]: New session 7 of user core.
Jul 1 08:42:49.348900 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 1 08:42:49.716513 sudo[2141]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 1 08:42:49.716708 sudo[2141]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 1 08:42:49.726407 sudo[2141]: pam_unix(sudo:session): session closed for user root
Jul 1 08:42:49.828688 sshd[2140]: Connection closed by 10.200.16.10 port 55542
Jul 1 08:42:49.829373 sshd-session[2137]: pam_unix(sshd:session): session closed for user core
Jul 1 08:42:49.832706 systemd[1]: sshd@4-10.200.8.13:22-10.200.16.10:55542.service: Deactivated successfully.
Jul 1 08:42:49.834041 systemd[1]: session-7.scope: Deactivated successfully.
Jul 1 08:42:49.835669 systemd-logind[1700]: Session 7 logged out. Waiting for processes to exit.
Jul 1 08:42:49.836409 systemd-logind[1700]: Removed session 7.
Jul 1 08:42:49.943872 systemd[1]: Started sshd@5-10.200.8.13:22-10.200.16.10:60644.service - OpenSSH per-connection server daemon (10.200.16.10:60644).
Jul 1 08:42:50.574127 sshd[2147]: Accepted publickey for core from 10.200.16.10 port 60644 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:42:50.575397 sshd-session[2147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:42:50.579668 systemd-logind[1700]: New session 8 of user core.
Jul 1 08:42:50.585929 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 1 08:42:50.917495 sudo[2152]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 1 08:42:50.917694 sudo[2152]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 1 08:42:50.923045 sudo[2152]: pam_unix(sudo:session): session closed for user root
Jul 1 08:42:50.926521 sudo[2151]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 1 08:42:50.926713 sudo[2151]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 1 08:42:50.933387 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 1 08:42:50.963317 augenrules[2174]: No rules
Jul 1 08:42:50.964161 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 1 08:42:50.964340 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 1 08:42:50.965300 sudo[2151]: pam_unix(sudo:session): session closed for user root
Jul 1 08:42:51.067506 sshd[2150]: Connection closed by 10.200.16.10 port 60644
Jul 1 08:42:51.067980 sshd-session[2147]: pam_unix(sshd:session): session closed for user core
Jul 1 08:42:51.070695 systemd[1]: sshd@5-10.200.8.13:22-10.200.16.10:60644.service: Deactivated successfully.
Jul 1 08:42:51.072007 systemd[1]: session-8.scope: Deactivated successfully.
Jul 1 08:42:51.073099 systemd-logind[1700]: Session 8 logged out. Waiting for processes to exit.
Jul 1 08:42:51.074039 systemd-logind[1700]: Removed session 8.
Jul 1 08:42:51.177594 systemd[1]: Started sshd@6-10.200.8.13:22-10.200.16.10:60660.service - OpenSSH per-connection server daemon (10.200.16.10:60660).
Jul 1 08:42:51.295077 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jul 1 08:42:51.808942 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 60660 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:42:51.810161 sshd-session[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:42:51.814560 systemd-logind[1700]: New session 9 of user core.
Jul 1 08:42:51.820895 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 1 08:42:52.151982 sudo[2187]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 1 08:42:52.152174 sudo[2187]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 1 08:42:52.561230 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 1 08:42:52.570043 (dockerd)[2205]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 1 08:42:52.897908 dockerd[2205]: time="2025-07-01T08:42:52.897653442Z" level=info msg="Starting up"
Jul 1 08:42:52.898822 dockerd[2205]: time="2025-07-01T08:42:52.898659936Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 1 08:42:53.067284 dockerd[2205]: time="2025-07-01T08:42:53.067237494Z" level=info msg="Loading containers: start."
Jul 1 08:42:53.082775 kernel: Initializing XFRM netlink socket
Jul 1 08:42:53.259071 systemd-networkd[1349]: docker0: Link UP
Jul 1 08:42:53.271472 dockerd[2205]: time="2025-07-01T08:42:53.271440372Z" level=info msg="Loading containers: done."
Jul 1 08:42:53.298004 dockerd[2205]: time="2025-07-01T08:42:53.297967988Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 1 08:42:53.298121 dockerd[2205]: time="2025-07-01T08:42:53.298037103Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 1 08:42:53.298121 dockerd[2205]: time="2025-07-01T08:42:53.298111882Z" level=info msg="Initializing buildkit"
Jul 1 08:42:53.352682 dockerd[2205]: time="2025-07-01T08:42:53.352642243Z" level=info msg="Completed buildkit initialization"
Jul 1 08:42:53.358629 dockerd[2205]: time="2025-07-01T08:42:53.358582522Z" level=info msg="Daemon has completed initialization"
Jul 1 08:42:53.358911 dockerd[2205]: time="2025-07-01T08:42:53.358646675Z" level=info msg="API listen on /run/docker.sock"
Jul 1 08:42:53.358773 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 1 08:42:54.479037 containerd[1721]: time="2025-07-01T08:42:54.478998086Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 1 08:42:55.274289 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 1 08:42:55.275980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 1 08:42:55.746810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 1 08:42:55.751034 (kubelet)[2414]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 1 08:42:55.780974 kubelet[2414]: E0701 08:42:55.780935 2414 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 1 08:42:55.782271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 1 08:42:55.782385 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 1 08:42:55.782642 systemd[1]: kubelet.service: Consumed 120ms CPU time, 110.1M memory peak.
Jul 1 08:42:55.886017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008653484.mount: Deactivated successfully.
Jul 1 08:42:56.952112 containerd[1721]: time="2025-07-01T08:42:56.952066213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:42:56.954150 containerd[1721]: time="2025-07-01T08:42:56.954117493Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053"
Jul 1 08:42:56.956564 containerd[1721]: time="2025-07-01T08:42:56.956528308Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:42:56.960272 containerd[1721]: time="2025-07-01T08:42:56.960239404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:42:56.961047 containerd[1721]: time="2025-07-01T08:42:56.960785553Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.481748024s"
Jul 1 08:42:56.961047 containerd[1721]: time="2025-07-01T08:42:56.960815258Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\""
Jul 1 08:42:56.961360 containerd[1721]: time="2025-07-01T08:42:56.961346089Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 1 08:42:57.023628 update_engine[1703]: I20250701 08:42:57.023581 1703 update_attempter.cc:509] Updating boot flags...
Jul 1 08:42:58.303591 containerd[1721]: time="2025-07-01T08:42:58.303548016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:42:58.306248 containerd[1721]: time="2025-07-01T08:42:58.306216030Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920"
Jul 1 08:42:58.308802 containerd[1721]: time="2025-07-01T08:42:58.308773866Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:42:58.313981 containerd[1721]: time="2025-07-01T08:42:58.313939508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:42:58.314772 containerd[1721]: time="2025-07-01T08:42:58.314481101Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.353072307s"
Jul 1 08:42:58.314772 containerd[1721]: time="2025-07-01T08:42:58.314510809Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\""
Jul 1 08:42:58.314991 containerd[1721]: time="2025-07-01T08:42:58.314966245Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 1 08:42:59.442795 containerd[1721]: time="2025-07-01T08:42:59.442739242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:42:59.445700 containerd[1721]: time="2025-07-01T08:42:59.445666561Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924"
Jul 1 08:42:59.448375 containerd[1721]: time="2025-07-01T08:42:59.448336717Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:42:59.451825 containerd[1721]: time="2025-07-01T08:42:59.451788643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:42:59.452689 containerd[1721]: time="2025-07-01T08:42:59.452287187Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.137294526s"
Jul 1 08:42:59.452689 containerd[1721]: time="2025-07-01T08:42:59.452315490Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\""
Jul 1 08:42:59.452846 containerd[1721]: time="2025-07-01T08:42:59.452833229Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 1 08:43:00.383548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1152057852.mount: Deactivated successfully.
Jul 1 08:43:00.706669 containerd[1721]: time="2025-07-01T08:43:00.706568752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:43:00.709058 containerd[1721]: time="2025-07-01T08:43:00.709024964Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371"
Jul 1 08:43:00.712619 containerd[1721]: time="2025-07-01T08:43:00.712592881Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:43:00.715974 containerd[1721]: time="2025-07-01T08:43:00.715934394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:43:00.716205 containerd[1721]: time="2025-07-01T08:43:00.716185276Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.263331014s"
Jul 1 08:43:00.716237 containerd[1721]: time="2025-07-01T08:43:00.716214548Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\""
Jul 1 08:43:00.716638 containerd[1721]: time="2025-07-01T08:43:00.716614039Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 1 08:43:01.341561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2230700369.mount: Deactivated successfully.
Jul 1 08:43:02.256867 containerd[1721]: time="2025-07-01T08:43:02.256824241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:43:02.259944 containerd[1721]: time="2025-07-01T08:43:02.259911267Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jul 1 08:43:02.264427 containerd[1721]: time="2025-07-01T08:43:02.264361028Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:43:02.269773 containerd[1721]: time="2025-07-01T08:43:02.269583683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:43:02.270327 containerd[1721]: time="2025-07-01T08:43:02.270305606Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.553668225s"
Jul 1 08:43:02.270368 containerd[1721]: time="2025-07-01T08:43:02.270337260Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 1 08:43:02.270948 containerd[1721]: time="2025-07-01T08:43:02.270877039Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 1 08:43:02.846075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121060224.mount: Deactivated successfully.
Jul 1 08:43:02.872819 containerd[1721]: time="2025-07-01T08:43:02.872786341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 1 08:43:02.874918 containerd[1721]: time="2025-07-01T08:43:02.874888500Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jul 1 08:43:02.878076 containerd[1721]: time="2025-07-01T08:43:02.878039009Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 1 08:43:02.881710 containerd[1721]: time="2025-07-01T08:43:02.881675830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 1 08:43:02.882305 containerd[1721]: time="2025-07-01T08:43:02.882054511Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 611.133503ms"
Jul 1 08:43:02.882305 containerd[1721]: time="2025-07-01T08:43:02.882081036Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 1 08:43:02.882496 containerd[1721]: time="2025-07-01T08:43:02.882481836Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 1 08:43:03.486579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount418258377.mount: Deactivated successfully.
Jul 1 08:43:05.078311 containerd[1721]: time="2025-07-01T08:43:05.078267124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:43:05.081001 containerd[1721]: time="2025-07-01T08:43:05.080966196Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368"
Jul 1 08:43:05.083972 containerd[1721]: time="2025-07-01T08:43:05.083941002Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:43:05.087813 containerd[1721]: time="2025-07-01T08:43:05.087783266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:43:05.088566 containerd[1721]: time="2025-07-01T08:43:05.088439613Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.205930838s"
Jul 1 08:43:05.088566 containerd[1721]: time="2025-07-01T08:43:05.088470034Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jul 1 08:43:05.799892 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 1 08:43:05.801653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 1 08:43:06.260880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 1 08:43:06.267932 (kubelet)[2662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 1 08:43:06.310764 kubelet[2662]: E0701 08:43:06.310104 2662 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 1 08:43:06.311787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 1 08:43:06.311902 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 1 08:43:06.312169 systemd[1]: kubelet.service: Consumed 129ms CPU time, 108.6M memory peak.
Jul 1 08:43:07.078684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 1 08:43:07.078871 systemd[1]: kubelet.service: Consumed 129ms CPU time, 108.6M memory peak.
Jul 1 08:43:07.080835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 1 08:43:07.100741 systemd[1]: Reload requested from client PID 2676 ('systemctl') (unit session-9.scope)...
Jul 1 08:43:07.100795 systemd[1]: Reloading...
Jul 1 08:43:07.178776 zram_generator::config[2721]: No configuration found.
Jul 1 08:43:07.288360 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 08:43:07.374135 systemd[1]: Reloading finished in 273 ms. Jul 1 08:43:07.411092 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 1 08:43:07.411157 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 1 08:43:07.411374 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:43:07.412966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:43:07.882879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:43:07.889978 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 1 08:43:07.923703 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 1 08:43:07.923703 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 1 08:43:07.923703 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 1 08:43:07.923703 kubelet[2789]: I0701 08:43:07.922658 2789 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 1 08:43:08.136066 kubelet[2789]: I0701 08:43:08.135985 2789 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 1 08:43:08.136066 kubelet[2789]: I0701 08:43:08.136005 2789 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 1 08:43:08.136477 kubelet[2789]: I0701 08:43:08.136217 2789 server.go:954] "Client rotation is on, will bootstrap in background" Jul 1 08:43:08.167493 kubelet[2789]: E0701 08:43:08.167465 2789 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Jul 1 08:43:08.168306 kubelet[2789]: I0701 08:43:08.168216 2789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 1 08:43:08.176783 kubelet[2789]: I0701 08:43:08.176765 2789 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 1 08:43:08.179381 kubelet[2789]: I0701 08:43:08.179347 2789 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 1 08:43:08.181319 kubelet[2789]: I0701 08:43:08.181284 2789 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 1 08:43:08.181479 kubelet[2789]: I0701 08:43:08.181314 2789 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999.9.9-s-875ad0e937","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 1 08:43:08.181589 kubelet[2789]: I0701 08:43:08.181484 2789 topology_manager.go:138] "Creating topology manager with 
none policy" Jul 1 08:43:08.181589 kubelet[2789]: I0701 08:43:08.181492 2789 container_manager_linux.go:304] "Creating device plugin manager" Jul 1 08:43:08.181629 kubelet[2789]: I0701 08:43:08.181597 2789 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:43:08.184199 kubelet[2789]: I0701 08:43:08.184187 2789 kubelet.go:446] "Attempting to sync node with API server" Jul 1 08:43:08.184261 kubelet[2789]: I0701 08:43:08.184208 2789 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 1 08:43:08.184261 kubelet[2789]: I0701 08:43:08.184227 2789 kubelet.go:352] "Adding apiserver pod source" Jul 1 08:43:08.184261 kubelet[2789]: I0701 08:43:08.184237 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 1 08:43:08.190652 kubelet[2789]: W0701 08:43:08.190613 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Jul 1 08:43:08.190717 kubelet[2789]: E0701 08:43:08.190655 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Jul 1 08:43:08.190717 kubelet[2789]: W0701 08:43:08.190704 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999.9.9-s-875ad0e937&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Jul 1 08:43:08.190790 kubelet[2789]: E0701 08:43:08.190727 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Node: failed to list *v1.Node: Get \"https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999.9.9-s-875ad0e937&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Jul 1 08:43:08.190848 kubelet[2789]: I0701 08:43:08.190819 2789 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 1 08:43:08.191242 kubelet[2789]: I0701 08:43:08.191152 2789 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 1 08:43:08.191903 kubelet[2789]: W0701 08:43:08.191885 2789 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 1 08:43:08.193611 kubelet[2789]: I0701 08:43:08.193592 2789 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 1 08:43:08.193676 kubelet[2789]: I0701 08:43:08.193624 2789 server.go:1287] "Started kubelet" Jul 1 08:43:08.193780 kubelet[2789]: I0701 08:43:08.193729 2789 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 1 08:43:08.194782 kubelet[2789]: I0701 08:43:08.194448 2789 server.go:479] "Adding debug handlers to kubelet server" Jul 1 08:43:08.196645 kubelet[2789]: I0701 08:43:08.196406 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 1 08:43:08.197886 kubelet[2789]: I0701 08:43:08.197460 2789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 1 08:43:08.197886 kubelet[2789]: I0701 08:43:08.197643 2789 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 1 08:43:08.199063 kubelet[2789]: E0701 08:43:08.197798 2789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.13:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.13:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-9999.9.9-s-875ad0e937.184e14159f5347bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-9999.9.9-s-875ad0e937,UID:ci-9999.9.9-s-875ad0e937,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-9999.9.9-s-875ad0e937,},FirstTimestamp:2025-07-01 08:43:08.193605563 +0000 UTC m=+0.300809855,LastTimestamp:2025-07-01 08:43:08.193605563 +0000 UTC m=+0.300809855,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-9999.9.9-s-875ad0e937,}" Jul 1 08:43:08.201313 kubelet[2789]: I0701 08:43:08.201302 2789 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 1 08:43:08.201585 kubelet[2789]: E0701 08:43:08.201571 2789 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-9999.9.9-s-875ad0e937\" not found" Jul 1 08:43:08.201717 kubelet[2789]: I0701 08:43:08.201708 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 1 08:43:08.202815 kubelet[2789]: I0701 08:43:08.202801 2789 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 1 08:43:08.202913 kubelet[2789]: I0701 08:43:08.202907 2789 reconciler.go:26] "Reconciler: start to sync state" Jul 1 08:43:08.203848 kubelet[2789]: W0701 08:43:08.203371 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Jul 1 08:43:08.203848 kubelet[2789]: E0701 08:43:08.203409 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: 
Get \"https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Jul 1 08:43:08.203848 kubelet[2789]: E0701 08:43:08.203457 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999.9.9-s-875ad0e937?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="200ms" Jul 1 08:43:08.203848 kubelet[2789]: E0701 08:43:08.203529 2789 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 1 08:43:08.204244 kubelet[2789]: I0701 08:43:08.204233 2789 factory.go:221] Registration of the systemd container factory successfully Jul 1 08:43:08.204359 kubelet[2789]: I0701 08:43:08.204349 2789 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 1 08:43:08.205926 kubelet[2789]: I0701 08:43:08.205635 2789 factory.go:221] Registration of the containerd container factory successfully Jul 1 08:43:08.216895 kubelet[2789]: I0701 08:43:08.216796 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 1 08:43:08.217861 kubelet[2789]: I0701 08:43:08.217846 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 1 08:43:08.217929 kubelet[2789]: I0701 08:43:08.217924 2789 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 1 08:43:08.217972 kubelet[2789]: I0701 08:43:08.217967 2789 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 1 08:43:08.218007 kubelet[2789]: I0701 08:43:08.218003 2789 kubelet.go:2382] "Starting kubelet main sync loop" Jul 1 08:43:08.218071 kubelet[2789]: E0701 08:43:08.218060 2789 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 1 08:43:08.222256 kubelet[2789]: W0701 08:43:08.222034 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Jul 1 08:43:08.222373 kubelet[2789]: E0701 08:43:08.222357 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Jul 1 08:43:08.223375 kubelet[2789]: I0701 08:43:08.223364 2789 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 1 08:43:08.223446 kubelet[2789]: I0701 08:43:08.223440 2789 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 1 08:43:08.223625 kubelet[2789]: I0701 08:43:08.223485 2789 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:43:08.234575 kubelet[2789]: I0701 08:43:08.234564 2789 policy_none.go:49] "None policy: Start" Jul 1 08:43:08.234637 kubelet[2789]: I0701 08:43:08.234632 2789 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 1 08:43:08.234671 kubelet[2789]: I0701 08:43:08.234668 2789 state_mem.go:35] "Initializing new in-memory state store" Jul 1 08:43:08.241774 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 1 08:43:08.249510 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 1 08:43:08.251902 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 1 08:43:08.261285 kubelet[2789]: I0701 08:43:08.261203 2789 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 1 08:43:08.261340 kubelet[2789]: I0701 08:43:08.261334 2789 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 1 08:43:08.261364 kubelet[2789]: I0701 08:43:08.261343 2789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 1 08:43:08.261828 kubelet[2789]: I0701 08:43:08.261676 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 1 08:43:08.262471 kubelet[2789]: E0701 08:43:08.262452 2789 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 1 08:43:08.262534 kubelet[2789]: E0701 08:43:08.262498 2789 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-9999.9.9-s-875ad0e937\" not found" Jul 1 08:43:08.324600 systemd[1]: Created slice kubepods-burstable-podf12393058608a2f5b7f7598cae5c81ec.slice - libcontainer container kubepods-burstable-podf12393058608a2f5b7f7598cae5c81ec.slice. Jul 1 08:43:08.342764 kubelet[2789]: E0701 08:43:08.342601 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-s-875ad0e937\" not found" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.344765 systemd[1]: Created slice kubepods-burstable-podb519080902f6b7daae098e54237729f0.slice - libcontainer container kubepods-burstable-podb519080902f6b7daae098e54237729f0.slice. 
Jul 1 08:43:08.349629 kubelet[2789]: E0701 08:43:08.349502 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-s-875ad0e937\" not found" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.351729 systemd[1]: Created slice kubepods-burstable-pod9a9bb211f6af2ce004224a9a45ebd76d.slice - libcontainer container kubepods-burstable-pod9a9bb211f6af2ce004224a9a45ebd76d.slice. Jul 1 08:43:08.353214 kubelet[2789]: E0701 08:43:08.353200 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-s-875ad0e937\" not found" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.363211 kubelet[2789]: I0701 08:43:08.363197 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.363499 kubelet[2789]: E0701 08:43:08.363485 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.405285 kubelet[2789]: I0701 08:43:08.403860 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a9bb211f6af2ce004224a9a45ebd76d-kubeconfig\") pod \"kube-scheduler-ci-9999.9.9-s-875ad0e937\" (UID: \"9a9bb211f6af2ce004224a9a45ebd76d\") " pod="kube-system/kube-scheduler-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.405285 kubelet[2789]: I0701 08:43:08.405195 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f12393058608a2f5b7f7598cae5c81ec-ca-certs\") pod \"kube-apiserver-ci-9999.9.9-s-875ad0e937\" (UID: \"f12393058608a2f5b7f7598cae5c81ec\") " pod="kube-system/kube-apiserver-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.405285 kubelet[2789]: I0701 08:43:08.405210 2789 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f12393058608a2f5b7f7598cae5c81ec-k8s-certs\") pod \"kube-apiserver-ci-9999.9.9-s-875ad0e937\" (UID: \"f12393058608a2f5b7f7598cae5c81ec\") " pod="kube-system/kube-apiserver-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.405285 kubelet[2789]: I0701 08:43:08.405221 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f12393058608a2f5b7f7598cae5c81ec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999.9.9-s-875ad0e937\" (UID: \"f12393058608a2f5b7f7598cae5c81ec\") " pod="kube-system/kube-apiserver-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.405285 kubelet[2789]: I0701 08:43:08.405237 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b519080902f6b7daae098e54237729f0-ca-certs\") pod \"kube-controller-manager-ci-9999.9.9-s-875ad0e937\" (UID: \"b519080902f6b7daae098e54237729f0\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.405398 kubelet[2789]: E0701 08:43:08.403868 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999.9.9-s-875ad0e937?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="400ms" Jul 1 08:43:08.506127 kubelet[2789]: I0701 08:43:08.506055 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b519080902f6b7daae098e54237729f0-kubeconfig\") pod \"kube-controller-manager-ci-9999.9.9-s-875ad0e937\" (UID: \"b519080902f6b7daae098e54237729f0\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-s-875ad0e937" Jul 1 
08:43:08.506127 kubelet[2789]: I0701 08:43:08.506125 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b519080902f6b7daae098e54237729f0-k8s-certs\") pod \"kube-controller-manager-ci-9999.9.9-s-875ad0e937\" (UID: \"b519080902f6b7daae098e54237729f0\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.506311 kubelet[2789]: I0701 08:43:08.506149 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b519080902f6b7daae098e54237729f0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999.9.9-s-875ad0e937\" (UID: \"b519080902f6b7daae098e54237729f0\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.506311 kubelet[2789]: I0701 08:43:08.506184 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b519080902f6b7daae098e54237729f0-flexvolume-dir\") pod \"kube-controller-manager-ci-9999.9.9-s-875ad0e937\" (UID: \"b519080902f6b7daae098e54237729f0\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.565977 kubelet[2789]: I0701 08:43:08.565950 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.566378 kubelet[2789]: E0701 08:43:08.566337 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.644570 containerd[1721]: time="2025-07-01T08:43:08.644521162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999.9.9-s-875ad0e937,Uid:f12393058608a2f5b7f7598cae5c81ec,Namespace:kube-system,Attempt:0,}" 
Jul 1 08:43:08.651188 containerd[1721]: time="2025-07-01T08:43:08.651013581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999.9.9-s-875ad0e937,Uid:b519080902f6b7daae098e54237729f0,Namespace:kube-system,Attempt:0,}" Jul 1 08:43:08.654102 containerd[1721]: time="2025-07-01T08:43:08.654076657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999.9.9-s-875ad0e937,Uid:9a9bb211f6af2ce004224a9a45ebd76d,Namespace:kube-system,Attempt:0,}" Jul 1 08:43:08.724410 containerd[1721]: time="2025-07-01T08:43:08.724321335Z" level=info msg="connecting to shim 04909f7dadc509eea8c6224fe59b03a873f07daddd6d5f930089c9b0e27e9ea8" address="unix:///run/containerd/s/2b38e6edadc6c010385794b9171c8c2bdda2d458f540aa01a8222574c0aeeac7" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:43:08.752955 systemd[1]: Started cri-containerd-04909f7dadc509eea8c6224fe59b03a873f07daddd6d5f930089c9b0e27e9ea8.scope - libcontainer container 04909f7dadc509eea8c6224fe59b03a873f07daddd6d5f930089c9b0e27e9ea8. Jul 1 08:43:08.755028 containerd[1721]: time="2025-07-01T08:43:08.754991369Z" level=info msg="connecting to shim 52425ce2564a324630c3e0cece77413f78958336ea7a406c17049336c724115d" address="unix:///run/containerd/s/59b800fb19ba5a23f7d833898a31b1da2ab9f02a18b4e933a8a4ff0f625a4652" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:43:08.761330 containerd[1721]: time="2025-07-01T08:43:08.760962169Z" level=info msg="connecting to shim 31141210230d3d51f49757ce8c1ca8630ad983098cbf4cb9c6bc046cba88a395" address="unix:///run/containerd/s/c5e5b08e0e5c5fd2bf9329e653c6332d2fe4d74db1dc7f6df5ff90ebbc81f57e" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:43:08.787004 systemd[1]: Started cri-containerd-52425ce2564a324630c3e0cece77413f78958336ea7a406c17049336c724115d.scope - libcontainer container 52425ce2564a324630c3e0cece77413f78958336ea7a406c17049336c724115d. 
Jul 1 08:43:08.791245 systemd[1]: Started cri-containerd-31141210230d3d51f49757ce8c1ca8630ad983098cbf4cb9c6bc046cba88a395.scope - libcontainer container 31141210230d3d51f49757ce8c1ca8630ad983098cbf4cb9c6bc046cba88a395. Jul 1 08:43:08.805882 kubelet[2789]: E0701 08:43:08.805830 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999.9.9-s-875ad0e937?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="800ms" Jul 1 08:43:08.836391 containerd[1721]: time="2025-07-01T08:43:08.836338878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999.9.9-s-875ad0e937,Uid:f12393058608a2f5b7f7598cae5c81ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"04909f7dadc509eea8c6224fe59b03a873f07daddd6d5f930089c9b0e27e9ea8\"" Jul 1 08:43:08.842149 containerd[1721]: time="2025-07-01T08:43:08.842096701Z" level=info msg="CreateContainer within sandbox \"04909f7dadc509eea8c6224fe59b03a873f07daddd6d5f930089c9b0e27e9ea8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 1 08:43:08.853550 containerd[1721]: time="2025-07-01T08:43:08.853517370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999.9.9-s-875ad0e937,Uid:b519080902f6b7daae098e54237729f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"52425ce2564a324630c3e0cece77413f78958336ea7a406c17049336c724115d\"" Jul 1 08:43:08.857074 containerd[1721]: time="2025-07-01T08:43:08.857044846Z" level=info msg="CreateContainer within sandbox \"52425ce2564a324630c3e0cece77413f78958336ea7a406c17049336c724115d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 1 08:43:08.862878 containerd[1721]: time="2025-07-01T08:43:08.862857719Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-9999.9.9-s-875ad0e937,Uid:9a9bb211f6af2ce004224a9a45ebd76d,Namespace:kube-system,Attempt:0,} returns sandbox id \"31141210230d3d51f49757ce8c1ca8630ad983098cbf4cb9c6bc046cba88a395\"" Jul 1 08:43:08.864208 containerd[1721]: time="2025-07-01T08:43:08.864189935Z" level=info msg="CreateContainer within sandbox \"31141210230d3d51f49757ce8c1ca8630ad983098cbf4cb9c6bc046cba88a395\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 1 08:43:08.884976 containerd[1721]: time="2025-07-01T08:43:08.884954669Z" level=info msg="Container 5a95e46e763edc9db4c732603f87c291bd810165842521f0786d707edcae57e8: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:43:08.890670 containerd[1721]: time="2025-07-01T08:43:08.890577001Z" level=info msg="Container 0cd3cc8109c645eb64d268cb65321f7abea15985672cc0e8c2f1f7e3c64e930d: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:43:08.896427 containerd[1721]: time="2025-07-01T08:43:08.896402367Z" level=info msg="Container 286754b1000e0658d295f0b8a3a7db0d2a707694892902bfbe541f2d00cc85a3: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:43:08.914622 containerd[1721]: time="2025-07-01T08:43:08.914592106Z" level=info msg="CreateContainer within sandbox \"31141210230d3d51f49757ce8c1ca8630ad983098cbf4cb9c6bc046cba88a395\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0cd3cc8109c645eb64d268cb65321f7abea15985672cc0e8c2f1f7e3c64e930d\"" Jul 1 08:43:08.914991 containerd[1721]: time="2025-07-01T08:43:08.914972923Z" level=info msg="StartContainer for \"0cd3cc8109c645eb64d268cb65321f7abea15985672cc0e8c2f1f7e3c64e930d\"" Jul 1 08:43:08.915596 containerd[1721]: time="2025-07-01T08:43:08.915565138Z" level=info msg="connecting to shim 0cd3cc8109c645eb64d268cb65321f7abea15985672cc0e8c2f1f7e3c64e930d" address="unix:///run/containerd/s/c5e5b08e0e5c5fd2bf9329e653c6332d2fe4d74db1dc7f6df5ff90ebbc81f57e" protocol=ttrpc version=3 Jul 1 08:43:08.924972 containerd[1721]: 
time="2025-07-01T08:43:08.924951967Z" level=info msg="CreateContainer within sandbox \"04909f7dadc509eea8c6224fe59b03a873f07daddd6d5f930089c9b0e27e9ea8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5a95e46e763edc9db4c732603f87c291bd810165842521f0786d707edcae57e8\"" Jul 1 08:43:08.925335 containerd[1721]: time="2025-07-01T08:43:08.925305262Z" level=info msg="StartContainer for \"5a95e46e763edc9db4c732603f87c291bd810165842521f0786d707edcae57e8\"" Jul 1 08:43:08.926457 containerd[1721]: time="2025-07-01T08:43:08.926428052Z" level=info msg="connecting to shim 5a95e46e763edc9db4c732603f87c291bd810165842521f0786d707edcae57e8" address="unix:///run/containerd/s/2b38e6edadc6c010385794b9171c8c2bdda2d458f540aa01a8222574c0aeeac7" protocol=ttrpc version=3 Jul 1 08:43:08.930014 systemd[1]: Started cri-containerd-0cd3cc8109c645eb64d268cb65321f7abea15985672cc0e8c2f1f7e3c64e930d.scope - libcontainer container 0cd3cc8109c645eb64d268cb65321f7abea15985672cc0e8c2f1f7e3c64e930d. 
Jul 1 08:43:08.933559 containerd[1721]: time="2025-07-01T08:43:08.933518172Z" level=info msg="CreateContainer within sandbox \"52425ce2564a324630c3e0cece77413f78958336ea7a406c17049336c724115d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"286754b1000e0658d295f0b8a3a7db0d2a707694892902bfbe541f2d00cc85a3\"" Jul 1 08:43:08.934697 containerd[1721]: time="2025-07-01T08:43:08.934676669Z" level=info msg="StartContainer for \"286754b1000e0658d295f0b8a3a7db0d2a707694892902bfbe541f2d00cc85a3\"" Jul 1 08:43:08.941079 containerd[1721]: time="2025-07-01T08:43:08.941034980Z" level=info msg="connecting to shim 286754b1000e0658d295f0b8a3a7db0d2a707694892902bfbe541f2d00cc85a3" address="unix:///run/containerd/s/59b800fb19ba5a23f7d833898a31b1da2ab9f02a18b4e933a8a4ff0f625a4652" protocol=ttrpc version=3 Jul 1 08:43:08.942256 systemd[1]: Started cri-containerd-5a95e46e763edc9db4c732603f87c291bd810165842521f0786d707edcae57e8.scope - libcontainer container 5a95e46e763edc9db4c732603f87c291bd810165842521f0786d707edcae57e8. Jul 1 08:43:08.963880 systemd[1]: Started cri-containerd-286754b1000e0658d295f0b8a3a7db0d2a707694892902bfbe541f2d00cc85a3.scope - libcontainer container 286754b1000e0658d295f0b8a3a7db0d2a707694892902bfbe541f2d00cc85a3. 
Jul 1 08:43:08.969266 kubelet[2789]: I0701 08:43:08.968960 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:08.969266 kubelet[2789]: E0701 08:43:08.969245 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:09.000418 containerd[1721]: time="2025-07-01T08:43:08.999953605Z" level=info msg="StartContainer for \"0cd3cc8109c645eb64d268cb65321f7abea15985672cc0e8c2f1f7e3c64e930d\" returns successfully" Jul 1 08:43:09.055155 containerd[1721]: time="2025-07-01T08:43:09.055086779Z" level=info msg="StartContainer for \"286754b1000e0658d295f0b8a3a7db0d2a707694892902bfbe541f2d00cc85a3\" returns successfully" Jul 1 08:43:09.055781 containerd[1721]: time="2025-07-01T08:43:09.055744748Z" level=info msg="StartContainer for \"5a95e46e763edc9db4c732603f87c291bd810165842521f0786d707edcae57e8\" returns successfully" Jul 1 08:43:09.228730 kubelet[2789]: E0701 08:43:09.228573 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-s-875ad0e937\" not found" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:09.232979 kubelet[2789]: E0701 08:43:09.232962 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-s-875ad0e937\" not found" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:09.241865 kubelet[2789]: E0701 08:43:09.241710 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-s-875ad0e937\" not found" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:09.771548 kubelet[2789]: I0701 08:43:09.771525 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:10.242821 kubelet[2789]: E0701 08:43:10.242790 2789 
kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-s-875ad0e937\" not found" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:10.243155 kubelet[2789]: E0701 08:43:10.243112 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-s-875ad0e937\" not found" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:10.512524 kubelet[2789]: E0701 08:43:10.512228 2789 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-9999.9.9-s-875ad0e937\" not found" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:10.577707 kubelet[2789]: I0701 08:43:10.577663 2789 kubelet_node_status.go:78] "Successfully registered node" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:10.602539 kubelet[2789]: I0701 08:43:10.602494 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:10.764666 kubelet[2789]: E0701 08:43:10.764572 2789 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-9999.9.9-s-875ad0e937\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:10.764666 kubelet[2789]: I0701 08:43:10.764604 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:10.766807 kubelet[2789]: E0701 08:43:10.766772 2789 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999.9.9-s-875ad0e937\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:10.766807 kubelet[2789]: I0701 08:43:10.766794 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:10.768013 
kubelet[2789]: E0701 08:43:10.767980 2789 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999.9.9-s-875ad0e937\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:11.187946 kubelet[2789]: I0701 08:43:11.187901 2789 apiserver.go:52] "Watching apiserver" Jul 1 08:43:11.203091 kubelet[2789]: I0701 08:43:11.203068 2789 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 1 08:43:13.358710 systemd[1]: Reload requested from client PID 3061 ('systemctl') (unit session-9.scope)... Jul 1 08:43:13.358724 systemd[1]: Reloading... Jul 1 08:43:13.421772 zram_generator::config[3107]: No configuration found. Jul 1 08:43:13.499872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 08:43:13.591242 systemd[1]: Reloading finished in 232 ms. Jul 1 08:43:13.609709 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:43:13.616281 systemd[1]: kubelet.service: Deactivated successfully. Jul 1 08:43:13.616508 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:43:13.616540 systemd[1]: kubelet.service: Consumed 560ms CPU time, 131M memory peak. Jul 1 08:43:13.618334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:43:14.142637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:43:14.151005 (kubelet)[3174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 1 08:43:14.186598 kubelet[3174]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 1 08:43:14.187706 kubelet[3174]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 1 08:43:14.187706 kubelet[3174]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 1 08:43:14.187706 kubelet[3174]: I0701 08:43:14.186861 3174 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 1 08:43:14.193443 kubelet[3174]: I0701 08:43:14.193422 3174 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 1 08:43:14.193443 kubelet[3174]: I0701 08:43:14.193438 3174 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 1 08:43:14.193642 kubelet[3174]: I0701 08:43:14.193632 3174 server.go:954] "Client rotation is on, will bootstrap in background" Jul 1 08:43:14.194416 kubelet[3174]: I0701 08:43:14.194400 3174 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 1 08:43:14.196139 kubelet[3174]: I0701 08:43:14.195950 3174 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 1 08:43:14.202920 kubelet[3174]: I0701 08:43:14.201697 3174 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 1 08:43:14.203738 kubelet[3174]: I0701 08:43:14.203714 3174 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 1 08:43:14.204048 kubelet[3174]: I0701 08:43:14.204023 3174 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 1 08:43:14.204180 kubelet[3174]: I0701 08:43:14.204044 3174 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999.9.9-s-875ad0e937","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 1 08:43:14.204276 kubelet[3174]: I0701 08:43:14.204184 3174 topology_manager.go:138] "Creating topology manager with 
none policy" Jul 1 08:43:14.204276 kubelet[3174]: I0701 08:43:14.204193 3174 container_manager_linux.go:304] "Creating device plugin manager" Jul 1 08:43:14.204276 kubelet[3174]: I0701 08:43:14.204240 3174 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:43:14.204372 kubelet[3174]: I0701 08:43:14.204345 3174 kubelet.go:446] "Attempting to sync node with API server" Jul 1 08:43:14.204394 kubelet[3174]: I0701 08:43:14.204376 3174 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 1 08:43:14.204413 kubelet[3174]: I0701 08:43:14.204395 3174 kubelet.go:352] "Adding apiserver pod source" Jul 1 08:43:14.204413 kubelet[3174]: I0701 08:43:14.204403 3174 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 1 08:43:14.207049 kubelet[3174]: I0701 08:43:14.206013 3174 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 1 08:43:14.207049 kubelet[3174]: I0701 08:43:14.206335 3174 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 1 08:43:14.207049 kubelet[3174]: I0701 08:43:14.206680 3174 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 1 08:43:14.207049 kubelet[3174]: I0701 08:43:14.206700 3174 server.go:1287] "Started kubelet" Jul 1 08:43:14.209624 kubelet[3174]: I0701 08:43:14.209582 3174 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 1 08:43:14.209955 kubelet[3174]: I0701 08:43:14.209944 3174 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 1 08:43:14.210062 kubelet[3174]: I0701 08:43:14.210050 3174 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 1 08:43:14.210940 kubelet[3174]: I0701 08:43:14.210915 3174 server.go:479] "Adding debug handlers to kubelet server" Jul 1 08:43:14.212527 kubelet[3174]: I0701 08:43:14.211703 3174 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 1 08:43:14.220507 kubelet[3174]: I0701 08:43:14.220490 3174 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 1 08:43:14.220882 kubelet[3174]: I0701 08:43:14.220871 3174 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 1 08:43:14.222025 kubelet[3174]: I0701 08:43:14.222010 3174 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 1 08:43:14.222883 kubelet[3174]: E0701 08:43:14.221573 3174 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-9999.9.9-s-875ad0e937\" not found" Jul 1 08:43:14.225185 kubelet[3174]: I0701 08:43:14.223062 3174 reconciler.go:26] "Reconciler: start to sync state" Jul 1 08:43:14.227093 kubelet[3174]: I0701 08:43:14.225729 3174 factory.go:221] Registration of the systemd container factory successfully Jul 1 08:43:14.227266 kubelet[3174]: I0701 08:43:14.227249 3174 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 1 08:43:14.229896 kubelet[3174]: E0701 08:43:14.229881 3174 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 1 08:43:14.230943 kubelet[3174]: I0701 08:43:14.230927 3174 factory.go:221] Registration of the containerd container factory successfully Jul 1 08:43:14.237223 kubelet[3174]: I0701 08:43:14.237191 3174 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 1 08:43:14.238183 kubelet[3174]: I0701 08:43:14.238157 3174 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 1 08:43:14.238183 kubelet[3174]: I0701 08:43:14.238184 3174 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 1 08:43:14.238271 kubelet[3174]: I0701 08:43:14.238201 3174 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 1 08:43:14.238271 kubelet[3174]: I0701 08:43:14.238214 3174 kubelet.go:2382] "Starting kubelet main sync loop" Jul 1 08:43:14.238271 kubelet[3174]: E0701 08:43:14.238250 3174 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 1 08:43:14.262880 kubelet[3174]: I0701 08:43:14.262865 3174 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 1 08:43:14.262880 kubelet[3174]: I0701 08:43:14.262876 3174 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 1 08:43:14.263201 kubelet[3174]: I0701 08:43:14.262893 3174 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:43:14.263201 kubelet[3174]: I0701 08:43:14.262996 3174 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 1 08:43:14.263201 kubelet[3174]: I0701 08:43:14.263003 3174 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 1 08:43:14.263201 kubelet[3174]: I0701 08:43:14.263017 3174 policy_none.go:49] "None policy: Start" Jul 1 08:43:14.263201 kubelet[3174]: I0701 08:43:14.263025 3174 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 1 08:43:14.263201 kubelet[3174]: I0701 08:43:14.263032 3174 state_mem.go:35] "Initializing new in-memory state store" Jul 1 08:43:14.263201 kubelet[3174]: I0701 08:43:14.263107 3174 state_mem.go:75] "Updated machine memory state" Jul 1 08:43:14.266564 kubelet[3174]: I0701 08:43:14.266549 3174 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 1 08:43:14.266712 kubelet[3174]: I0701 08:43:14.266659 
3174 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 1 08:43:14.266712 kubelet[3174]: I0701 08:43:14.266670 3174 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 1 08:43:14.267043 kubelet[3174]: I0701 08:43:14.266930 3174 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 1 08:43:14.269795 kubelet[3174]: E0701 08:43:14.269780 3174 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 1 08:43:14.339874 kubelet[3174]: I0701 08:43:14.339550 3174 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:14.340365 kubelet[3174]: I0701 08:43:14.340084 3174 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:14.340638 kubelet[3174]: I0701 08:43:14.340218 3174 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:14.348371 kubelet[3174]: W0701 08:43:14.348356 3174 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 1 08:43:14.352673 kubelet[3174]: W0701 08:43:14.352657 3174 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 1 08:43:14.353997 kubelet[3174]: W0701 08:43:14.353976 3174 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 1 08:43:14.370140 kubelet[3174]: I0701 08:43:14.370125 3174 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:14.387873 kubelet[3174]: I0701 
08:43:14.387857 3174 kubelet_node_status.go:124] "Node was previously registered" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:14.388003 kubelet[3174]: I0701 08:43:14.387907 3174 kubelet_node_status.go:78] "Successfully registered node" node="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:14.428051 kubelet[3174]: I0701 08:43:14.427914 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f12393058608a2f5b7f7598cae5c81ec-k8s-certs\") pod \"kube-apiserver-ci-9999.9.9-s-875ad0e937\" (UID: \"f12393058608a2f5b7f7598cae5c81ec\") " pod="kube-system/kube-apiserver-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:14.428951 kubelet[3174]: I0701 08:43:14.428690 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f12393058608a2f5b7f7598cae5c81ec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999.9.9-s-875ad0e937\" (UID: \"f12393058608a2f5b7f7598cae5c81ec\") " pod="kube-system/kube-apiserver-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:14.428951 kubelet[3174]: I0701 08:43:14.428774 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b519080902f6b7daae098e54237729f0-flexvolume-dir\") pod \"kube-controller-manager-ci-9999.9.9-s-875ad0e937\" (UID: \"b519080902f6b7daae098e54237729f0\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:14.428951 kubelet[3174]: I0701 08:43:14.428800 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b519080902f6b7daae098e54237729f0-k8s-certs\") pod \"kube-controller-manager-ci-9999.9.9-s-875ad0e937\" (UID: \"b519080902f6b7daae098e54237729f0\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-s-875ad0e937" Jul 1 
08:43:14.428951 kubelet[3174]: I0701 08:43:14.428816 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a9bb211f6af2ce004224a9a45ebd76d-kubeconfig\") pod \"kube-scheduler-ci-9999.9.9-s-875ad0e937\" (UID: \"9a9bb211f6af2ce004224a9a45ebd76d\") " pod="kube-system/kube-scheduler-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:14.428951 kubelet[3174]: I0701 08:43:14.428831 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f12393058608a2f5b7f7598cae5c81ec-ca-certs\") pod \"kube-apiserver-ci-9999.9.9-s-875ad0e937\" (UID: \"f12393058608a2f5b7f7598cae5c81ec\") " pod="kube-system/kube-apiserver-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:14.429097 kubelet[3174]: I0701 08:43:14.428868 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b519080902f6b7daae098e54237729f0-ca-certs\") pod \"kube-controller-manager-ci-9999.9.9-s-875ad0e937\" (UID: \"b519080902f6b7daae098e54237729f0\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:14.429097 kubelet[3174]: I0701 08:43:14.428883 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b519080902f6b7daae098e54237729f0-kubeconfig\") pod \"kube-controller-manager-ci-9999.9.9-s-875ad0e937\" (UID: \"b519080902f6b7daae098e54237729f0\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:14.429097 kubelet[3174]: I0701 08:43:14.428901 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b519080902f6b7daae098e54237729f0-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-9999.9.9-s-875ad0e937\" (UID: \"b519080902f6b7daae098e54237729f0\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:15.209168 kubelet[3174]: I0701 08:43:15.209132 3174 apiserver.go:52] "Watching apiserver" Jul 1 08:43:15.223145 kubelet[3174]: I0701 08:43:15.223104 3174 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 1 08:43:15.257710 kubelet[3174]: I0701 08:43:15.255869 3174 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:15.257710 kubelet[3174]: I0701 08:43:15.255869 3174 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:15.266768 kubelet[3174]: W0701 08:43:15.265913 3174 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 1 08:43:15.266768 kubelet[3174]: E0701 08:43:15.265966 3174 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999.9.9-s-875ad0e937\" already exists" pod="kube-system/kube-scheduler-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:15.266768 kubelet[3174]: W0701 08:43:15.266557 3174 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 1 08:43:15.266768 kubelet[3174]: E0701 08:43:15.266636 3174 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999.9.9-s-875ad0e937\" already exists" pod="kube-system/kube-apiserver-ci-9999.9.9-s-875ad0e937" Jul 1 08:43:15.272796 kubelet[3174]: I0701 08:43:15.272705 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-9999.9.9-s-875ad0e937" podStartSLOduration=1.2726943689999999 podStartE2EDuration="1.272694369s" 
podCreationTimestamp="2025-07-01 08:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:43:15.272592078 +0000 UTC m=+1.118203167" watchObservedRunningTime="2025-07-01 08:43:15.272694369 +0000 UTC m=+1.118305460" Jul 1 08:43:15.289007 kubelet[3174]: I0701 08:43:15.288970 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-9999.9.9-s-875ad0e937" podStartSLOduration=1.288956617 podStartE2EDuration="1.288956617s" podCreationTimestamp="2025-07-01 08:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:43:15.280841859 +0000 UTC m=+1.126452943" watchObservedRunningTime="2025-07-01 08:43:15.288956617 +0000 UTC m=+1.134567706" Jul 1 08:43:15.298385 kubelet[3174]: I0701 08:43:15.298354 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-9999.9.9-s-875ad0e937" podStartSLOduration=1.298344904 podStartE2EDuration="1.298344904s" podCreationTimestamp="2025-07-01 08:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:43:15.289237558 +0000 UTC m=+1.134848647" watchObservedRunningTime="2025-07-01 08:43:15.298344904 +0000 UTC m=+1.143955996" Jul 1 08:43:18.994468 kubelet[3174]: I0701 08:43:18.994438 3174 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 1 08:43:18.994938 containerd[1721]: time="2025-07-01T08:43:18.994771556Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 1 08:43:18.995122 kubelet[3174]: I0701 08:43:18.994999 3174 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 1 08:43:19.965413 systemd[1]: Created slice kubepods-besteffort-pod9ce75006_2f93_46e1_acec_39a3448aa8ea.slice - libcontainer container kubepods-besteffort-pod9ce75006_2f93_46e1_acec_39a3448aa8ea.slice. Jul 1 08:43:20.028295 kubelet[3174]: W0701 08:43:20.028271 3174 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-9999.9.9-s-875ad0e937" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-9999.9.9-s-875ad0e937' and this object Jul 1 08:43:20.028704 kubelet[3174]: I0701 08:43:20.028386 3174 status_manager.go:890] "Failed to get status for pod" podUID="b6a542b6-8667-4b14-84c5-2ee0e0942a9b" pod="tigera-operator/tigera-operator-747864d56d-gtnvz" err="pods \"tigera-operator-747864d56d-gtnvz\" is forbidden: User \"system:node:ci-9999.9.9-s-875ad0e937\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-9999.9.9-s-875ad0e937' and this object" Jul 1 08:43:20.028704 kubelet[3174]: E0701 08:43:20.028638 3174 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-9999.9.9-s-875ad0e937\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-9999.9.9-s-875ad0e937' and this object" logger="UnhandledError" Jul 1 08:43:20.028704 kubelet[3174]: W0701 08:43:20.028571 3174 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User 
"system:node:ci-9999.9.9-s-875ad0e937" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-9999.9.9-s-875ad0e937' and this object Jul 1 08:43:20.028704 kubelet[3174]: E0701 08:43:20.028670 3174 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-9999.9.9-s-875ad0e937\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-9999.9.9-s-875ad0e937' and this object" logger="UnhandledError" Jul 1 08:43:20.031871 systemd[1]: Created slice kubepods-besteffort-podb6a542b6_8667_4b14_84c5_2ee0e0942a9b.slice - libcontainer container kubepods-besteffort-podb6a542b6_8667_4b14_84c5_2ee0e0942a9b.slice. Jul 1 08:43:20.069503 kubelet[3174]: I0701 08:43:20.069467 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8c48\" (UniqueName: \"kubernetes.io/projected/b6a542b6-8667-4b14-84c5-2ee0e0942a9b-kube-api-access-x8c48\") pod \"tigera-operator-747864d56d-gtnvz\" (UID: \"b6a542b6-8667-4b14-84c5-2ee0e0942a9b\") " pod="tigera-operator/tigera-operator-747864d56d-gtnvz" Jul 1 08:43:20.069593 kubelet[3174]: I0701 08:43:20.069526 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ce75006-2f93-46e1-acec-39a3448aa8ea-xtables-lock\") pod \"kube-proxy-xggmx\" (UID: \"9ce75006-2f93-46e1-acec-39a3448aa8ea\") " pod="kube-system/kube-proxy-xggmx" Jul 1 08:43:20.069593 kubelet[3174]: I0701 08:43:20.069545 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/9ce75006-2f93-46e1-acec-39a3448aa8ea-lib-modules\") pod \"kube-proxy-xggmx\" (UID: \"9ce75006-2f93-46e1-acec-39a3448aa8ea\") " pod="kube-system/kube-proxy-xggmx" Jul 1 08:43:20.069593 kubelet[3174]: I0701 08:43:20.069560 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl7vc\" (UniqueName: \"kubernetes.io/projected/9ce75006-2f93-46e1-acec-39a3448aa8ea-kube-api-access-zl7vc\") pod \"kube-proxy-xggmx\" (UID: \"9ce75006-2f93-46e1-acec-39a3448aa8ea\") " pod="kube-system/kube-proxy-xggmx" Jul 1 08:43:20.069593 kubelet[3174]: I0701 08:43:20.069575 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9ce75006-2f93-46e1-acec-39a3448aa8ea-kube-proxy\") pod \"kube-proxy-xggmx\" (UID: \"9ce75006-2f93-46e1-acec-39a3448aa8ea\") " pod="kube-system/kube-proxy-xggmx" Jul 1 08:43:20.069593 kubelet[3174]: I0701 08:43:20.069589 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b6a542b6-8667-4b14-84c5-2ee0e0942a9b-var-lib-calico\") pod \"tigera-operator-747864d56d-gtnvz\" (UID: \"b6a542b6-8667-4b14-84c5-2ee0e0942a9b\") " pod="tigera-operator/tigera-operator-747864d56d-gtnvz" Jul 1 08:43:20.276921 containerd[1721]: time="2025-07-01T08:43:20.276884174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xggmx,Uid:9ce75006-2f93-46e1-acec-39a3448aa8ea,Namespace:kube-system,Attempt:0,}" Jul 1 08:43:20.312108 containerd[1721]: time="2025-07-01T08:43:20.312065057Z" level=info msg="connecting to shim b6f6f848071ed183dc80039b0d89fec57dabb1da9c516b6ea6460e5953b52086" address="unix:///run/containerd/s/efa6f025dbc9a81771eabb413c064c1343f79cbf4617c8728d96bbb9c4dc823c" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:43:20.336917 systemd[1]: Started 
cri-containerd-b6f6f848071ed183dc80039b0d89fec57dabb1da9c516b6ea6460e5953b52086.scope - libcontainer container b6f6f848071ed183dc80039b0d89fec57dabb1da9c516b6ea6460e5953b52086.
Jul 1 08:43:20.357559 containerd[1721]: time="2025-07-01T08:43:20.357532841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xggmx,Uid:9ce75006-2f93-46e1-acec-39a3448aa8ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6f6f848071ed183dc80039b0d89fec57dabb1da9c516b6ea6460e5953b52086\""
Jul 1 08:43:20.360267 containerd[1721]: time="2025-07-01T08:43:20.360238674Z" level=info msg="CreateContainer within sandbox \"b6f6f848071ed183dc80039b0d89fec57dabb1da9c516b6ea6460e5953b52086\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 1 08:43:20.378726 containerd[1721]: time="2025-07-01T08:43:20.378694001Z" level=info msg="Container 0c388d15d6e58082552548f00d5958e5974d1c66ad120c2666b8962832a4146d: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:43:20.400183 containerd[1721]: time="2025-07-01T08:43:20.400161155Z" level=info msg="CreateContainer within sandbox \"b6f6f848071ed183dc80039b0d89fec57dabb1da9c516b6ea6460e5953b52086\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0c388d15d6e58082552548f00d5958e5974d1c66ad120c2666b8962832a4146d\""
Jul 1 08:43:20.400660 containerd[1721]: time="2025-07-01T08:43:20.400549164Z" level=info msg="StartContainer for \"0c388d15d6e58082552548f00d5958e5974d1c66ad120c2666b8962832a4146d\""
Jul 1 08:43:20.401999 containerd[1721]: time="2025-07-01T08:43:20.401962112Z" level=info msg="connecting to shim 0c388d15d6e58082552548f00d5958e5974d1c66ad120c2666b8962832a4146d" address="unix:///run/containerd/s/efa6f025dbc9a81771eabb413c064c1343f79cbf4617c8728d96bbb9c4dc823c" protocol=ttrpc version=3
Jul 1 08:43:20.419888 systemd[1]: Started cri-containerd-0c388d15d6e58082552548f00d5958e5974d1c66ad120c2666b8962832a4146d.scope - libcontainer container 0c388d15d6e58082552548f00d5958e5974d1c66ad120c2666b8962832a4146d.
Jul 1 08:43:20.450588 containerd[1721]: time="2025-07-01T08:43:20.450555932Z" level=info msg="StartContainer for \"0c388d15d6e58082552548f00d5958e5974d1c66ad120c2666b8962832a4146d\" returns successfully"
Jul 1 08:43:21.175560 kubelet[3174]: E0701 08:43:21.175535 3174 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jul 1 08:43:21.175560 kubelet[3174]: E0701 08:43:21.175560 3174 projected.go:194] Error preparing data for projected volume kube-api-access-x8c48 for pod tigera-operator/tigera-operator-747864d56d-gtnvz: failed to sync configmap cache: timed out waiting for the condition
Jul 1 08:43:21.175930 kubelet[3174]: E0701 08:43:21.175620 3174 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6a542b6-8667-4b14-84c5-2ee0e0942a9b-kube-api-access-x8c48 podName:b6a542b6-8667-4b14-84c5-2ee0e0942a9b nodeName:}" failed. No retries permitted until 2025-07-01 08:43:21.675599537 +0000 UTC m=+7.521210626 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x8c48" (UniqueName: "kubernetes.io/projected/b6a542b6-8667-4b14-84c5-2ee0e0942a9b-kube-api-access-x8c48") pod "tigera-operator-747864d56d-gtnvz" (UID: "b6a542b6-8667-4b14-84c5-2ee0e0942a9b") : failed to sync configmap cache: timed out waiting for the condition
Jul 1 08:43:21.275864 kubelet[3174]: I0701 08:43:21.275822 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xggmx" podStartSLOduration=2.275806882 podStartE2EDuration="2.275806882s" podCreationTimestamp="2025-07-01 08:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:43:21.275086015 +0000 UTC m=+7.120697106" watchObservedRunningTime="2025-07-01 08:43:21.275806882 +0000 UTC m=+7.121417970"
Jul 1 08:43:21.835680 containerd[1721]: time="2025-07-01T08:43:21.835641308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-gtnvz,Uid:b6a542b6-8667-4b14-84c5-2ee0e0942a9b,Namespace:tigera-operator,Attempt:0,}"
Jul 1 08:43:21.878768 containerd[1721]: time="2025-07-01T08:43:21.878480160Z" level=info msg="connecting to shim c51b1f681e3f9d80ad76df32e3df628b6f498c6a1c19b2eaceaa8df27ccd6fd5" address="unix:///run/containerd/s/55f42cd6046335b2462f3b24d92b60d432c9fb421251dcb542c2fd432e9c13c5" namespace=k8s.io protocol=ttrpc version=3
Jul 1 08:43:21.901899 systemd[1]: Started cri-containerd-c51b1f681e3f9d80ad76df32e3df628b6f498c6a1c19b2eaceaa8df27ccd6fd5.scope - libcontainer container c51b1f681e3f9d80ad76df32e3df628b6f498c6a1c19b2eaceaa8df27ccd6fd5.
Jul 1 08:43:21.935592 containerd[1721]: time="2025-07-01T08:43:21.935559464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-gtnvz,Uid:b6a542b6-8667-4b14-84c5-2ee0e0942a9b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c51b1f681e3f9d80ad76df32e3df628b6f498c6a1c19b2eaceaa8df27ccd6fd5\""
Jul 1 08:43:21.936775 containerd[1721]: time="2025-07-01T08:43:21.936742512Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 1 08:43:23.202880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2883658532.mount: Deactivated successfully.
Jul 1 08:43:24.190291 containerd[1721]: time="2025-07-01T08:43:24.190254082Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:43:24.193042 containerd[1721]: time="2025-07-01T08:43:24.193014306Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 1 08:43:24.196444 containerd[1721]: time="2025-07-01T08:43:24.196407091Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:43:24.202256 containerd[1721]: time="2025-07-01T08:43:24.202217731Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:43:24.202685 containerd[1721]: time="2025-07-01T08:43:24.202609652Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.265813174s"
Jul 1 08:43:24.202685 containerd[1721]: time="2025-07-01T08:43:24.202633358Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 1 08:43:24.204199 containerd[1721]: time="2025-07-01T08:43:24.204165229Z" level=info msg="CreateContainer within sandbox \"c51b1f681e3f9d80ad76df32e3df628b6f498c6a1c19b2eaceaa8df27ccd6fd5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 1 08:43:24.227793 containerd[1721]: time="2025-07-01T08:43:24.227713917Z" level=info msg="Container ea419c9f093e15784e2a228b2642cb4ba23d160513cfe80499c8e20dff6c893f: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:43:24.230934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543299873.mount: Deactivated successfully.
Jul 1 08:43:24.241183 containerd[1721]: time="2025-07-01T08:43:24.241162447Z" level=info msg="CreateContainer within sandbox \"c51b1f681e3f9d80ad76df32e3df628b6f498c6a1c19b2eaceaa8df27ccd6fd5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ea419c9f093e15784e2a228b2642cb4ba23d160513cfe80499c8e20dff6c893f\""
Jul 1 08:43:24.241546 containerd[1721]: time="2025-07-01T08:43:24.241524046Z" level=info msg="StartContainer for \"ea419c9f093e15784e2a228b2642cb4ba23d160513cfe80499c8e20dff6c893f\""
Jul 1 08:43:24.242232 containerd[1721]: time="2025-07-01T08:43:24.242200592Z" level=info msg="connecting to shim ea419c9f093e15784e2a228b2642cb4ba23d160513cfe80499c8e20dff6c893f" address="unix:///run/containerd/s/55f42cd6046335b2462f3b24d92b60d432c9fb421251dcb542c2fd432e9c13c5" protocol=ttrpc version=3
Jul 1 08:43:24.261879 systemd[1]: Started cri-containerd-ea419c9f093e15784e2a228b2642cb4ba23d160513cfe80499c8e20dff6c893f.scope - libcontainer container ea419c9f093e15784e2a228b2642cb4ba23d160513cfe80499c8e20dff6c893f.
Jul 1 08:43:24.299767 containerd[1721]: time="2025-07-01T08:43:24.298612602Z" level=info msg="StartContainer for \"ea419c9f093e15784e2a228b2642cb4ba23d160513cfe80499c8e20dff6c893f\" returns successfully"
Jul 1 08:43:29.002562 kubelet[3174]: I0701 08:43:29.002357 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-gtnvz" podStartSLOduration=7.735438857 podStartE2EDuration="10.002339597s" podCreationTimestamp="2025-07-01 08:43:19 +0000 UTC" firstStartedPulling="2025-07-01 08:43:21.936282884 +0000 UTC m=+7.781893972" lastFinishedPulling="2025-07-01 08:43:24.203183635 +0000 UTC m=+10.048794712" observedRunningTime="2025-07-01 08:43:25.285474335 +0000 UTC m=+11.131085426" watchObservedRunningTime="2025-07-01 08:43:29.002339597 +0000 UTC m=+14.847950688"
Jul 1 08:43:29.622602 sudo[2187]: pam_unix(sudo:session): session closed for user root
Jul 1 08:43:29.734879 sshd[2186]: Connection closed by 10.200.16.10 port 60660
Jul 1 08:43:29.733906 sshd-session[2183]: pam_unix(sshd:session): session closed for user core
Jul 1 08:43:29.737828 systemd-logind[1700]: Session 9 logged out. Waiting for processes to exit.
Jul 1 08:43:29.738938 systemd[1]: sshd@6-10.200.8.13:22-10.200.16.10:60660.service: Deactivated successfully.
Jul 1 08:43:29.742058 systemd[1]: session-9.scope: Deactivated successfully.
Jul 1 08:43:29.742377 systemd[1]: session-9.scope: Consumed 2.915s CPU time, 227.2M memory peak.
Jul 1 08:43:29.747652 systemd-logind[1700]: Removed session 9.
Jul 1 08:43:32.252485 systemd[1]: Created slice kubepods-besteffort-poda43cc819_ecfa_4410_9b06_a9948345cf82.slice - libcontainer container kubepods-besteffort-poda43cc819_ecfa_4410_9b06_a9948345cf82.slice.
Jul 1 08:43:32.342881 kubelet[3174]: I0701 08:43:32.342736 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a43cc819-ecfa-4410-9b06-a9948345cf82-typha-certs\") pod \"calico-typha-9c5d66c57-v2vzg\" (UID: \"a43cc819-ecfa-4410-9b06-a9948345cf82\") " pod="calico-system/calico-typha-9c5d66c57-v2vzg"
Jul 1 08:43:32.342881 kubelet[3174]: I0701 08:43:32.342813 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a43cc819-ecfa-4410-9b06-a9948345cf82-tigera-ca-bundle\") pod \"calico-typha-9c5d66c57-v2vzg\" (UID: \"a43cc819-ecfa-4410-9b06-a9948345cf82\") " pod="calico-system/calico-typha-9c5d66c57-v2vzg"
Jul 1 08:43:32.342881 kubelet[3174]: I0701 08:43:32.342831 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfmhw\" (UniqueName: \"kubernetes.io/projected/a43cc819-ecfa-4410-9b06-a9948345cf82-kube-api-access-xfmhw\") pod \"calico-typha-9c5d66c57-v2vzg\" (UID: \"a43cc819-ecfa-4410-9b06-a9948345cf82\") " pod="calico-system/calico-typha-9c5d66c57-v2vzg"
Jul 1 08:43:32.537267 systemd[1]: Created slice kubepods-besteffort-podc9b1ae95_8c4c_43a3_9ff4_059c376d0e3b.slice - libcontainer container kubepods-besteffort-podc9b1ae95_8c4c_43a3_9ff4_059c376d0e3b.slice.
Jul 1 08:43:32.556032 containerd[1721]: time="2025-07-01T08:43:32.555999316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9c5d66c57-v2vzg,Uid:a43cc819-ecfa-4410-9b06-a9948345cf82,Namespace:calico-system,Attempt:0,}"
Jul 1 08:43:32.594949 containerd[1721]: time="2025-07-01T08:43:32.594888018Z" level=info msg="connecting to shim 852cb48582ec0f9cb0750f4e3f48973cac3372b7ca56681bf947f2ab24b6f461" address="unix:///run/containerd/s/e90bcc19577056741df5d9f252b6742325334dc46a8599846f268ebcbefc306b" namespace=k8s.io protocol=ttrpc version=3
Jul 1 08:43:32.620888 systemd[1]: Started cri-containerd-852cb48582ec0f9cb0750f4e3f48973cac3372b7ca56681bf947f2ab24b6f461.scope - libcontainer container 852cb48582ec0f9cb0750f4e3f48973cac3372b7ca56681bf947f2ab24b6f461.
Jul 1 08:43:32.644690 kubelet[3174]: I0701 08:43:32.644633 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b-cni-net-dir\") pod \"calico-node-5gd8x\" (UID: \"c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b\") " pod="calico-system/calico-node-5gd8x"
Jul 1 08:43:32.644690 kubelet[3174]: I0701 08:43:32.644666 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b-flexvol-driver-host\") pod \"calico-node-5gd8x\" (UID: \"c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b\") " pod="calico-system/calico-node-5gd8x"
Jul 1 08:43:32.644896 kubelet[3174]: I0701 08:43:32.644794 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b-policysync\") pod \"calico-node-5gd8x\" (UID: \"c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b\") " pod="calico-system/calico-node-5gd8x"
Jul 1 08:43:32.644896 kubelet[3174]: I0701 08:43:32.644813 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b-var-lib-calico\") pod \"calico-node-5gd8x\" (UID: \"c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b\") " pod="calico-system/calico-node-5gd8x"
Jul 1 08:43:32.644896 kubelet[3174]: I0701 08:43:32.644828 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b-var-run-calico\") pod \"calico-node-5gd8x\" (UID: \"c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b\") " pod="calico-system/calico-node-5gd8x"
Jul 1 08:43:32.645097 kubelet[3174]: I0701 08:43:32.645006 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b-cni-bin-dir\") pod \"calico-node-5gd8x\" (UID: \"c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b\") " pod="calico-system/calico-node-5gd8x"
Jul 1 08:43:32.645097 kubelet[3174]: I0701 08:43:32.645022 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b-node-certs\") pod \"calico-node-5gd8x\" (UID: \"c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b\") " pod="calico-system/calico-node-5gd8x"
Jul 1 08:43:32.645097 kubelet[3174]: I0701 08:43:32.645038 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b-lib-modules\") pod \"calico-node-5gd8x\" (UID: \"c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b\") " pod="calico-system/calico-node-5gd8x"
Jul 1 08:43:32.645247 kubelet[3174]: I0701 08:43:32.645161 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b-cni-log-dir\") pod \"calico-node-5gd8x\" (UID: \"c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b\") " pod="calico-system/calico-node-5gd8x"
Jul 1 08:43:32.645247 kubelet[3174]: I0701 08:43:32.645174 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b-xtables-lock\") pod \"calico-node-5gd8x\" (UID: \"c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b\") " pod="calico-system/calico-node-5gd8x"
Jul 1 08:43:32.645247 kubelet[3174]: I0701 08:43:32.645189 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrtql\" (UniqueName: \"kubernetes.io/projected/c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b-kube-api-access-zrtql\") pod \"calico-node-5gd8x\" (UID: \"c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b\") " pod="calico-system/calico-node-5gd8x"
Jul 1 08:43:32.645400 kubelet[3174]: I0701 08:43:32.645371 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b-tigera-ca-bundle\") pod \"calico-node-5gd8x\" (UID: \"c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b\") " pod="calico-system/calico-node-5gd8x"
Jul 1 08:43:32.656883 containerd[1721]: time="2025-07-01T08:43:32.656863695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9c5d66c57-v2vzg,Uid:a43cc819-ecfa-4410-9b06-a9948345cf82,Namespace:calico-system,Attempt:0,} returns sandbox id \"852cb48582ec0f9cb0750f4e3f48973cac3372b7ca56681bf947f2ab24b6f461\""
Jul 1 08:43:32.658238 containerd[1721]: time="2025-07-01T08:43:32.658174058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 1 08:43:32.749108 kubelet[3174]: E0701 08:43:32.749087 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.749258 kubelet[3174]: W0701 08:43:32.749199 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.749258 kubelet[3174]: E0701 08:43:32.749231 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.753530 kubelet[3174]: E0701 08:43:32.753477 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.753530 kubelet[3174]: W0701 08:43:32.753491 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.753530 kubelet[3174]: E0701 08:43:32.753506 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.762873 kubelet[3174]: E0701 08:43:32.762813 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.762873 kubelet[3174]: W0701 08:43:32.762830 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.762873 kubelet[3174]: E0701 08:43:32.762846 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.817160 kubelet[3174]: E0701 08:43:32.816825 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpd2x" podUID="921bd48b-6a52-4928-98d5-dc65a968d1c0"
Jul 1 08:43:32.828170 kubelet[3174]: E0701 08:43:32.828157 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.828412 kubelet[3174]: W0701 08:43:32.828188 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.828412 kubelet[3174]: E0701 08:43:32.828201 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.828616 kubelet[3174]: E0701 08:43:32.828605 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.828908 kubelet[3174]: W0701 08:43:32.828895 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.829052 kubelet[3174]: E0701 08:43:32.828974 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.829347 kubelet[3174]: E0701 08:43:32.829336 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.829514 kubelet[3174]: W0701 08:43:32.829504 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.829575 kubelet[3174]: E0701 08:43:32.829556 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.830121 kubelet[3174]: E0701 08:43:32.829996 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.830340 kubelet[3174]: W0701 08:43:32.830204 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.830340 kubelet[3174]: E0701 08:43:32.830222 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.830923 kubelet[3174]: E0701 08:43:32.830797 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.830923 kubelet[3174]: W0701 08:43:32.830810 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.830923 kubelet[3174]: E0701 08:43:32.830823 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.831205 kubelet[3174]: E0701 08:43:32.831194 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.831376 kubelet[3174]: W0701 08:43:32.831268 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.831376 kubelet[3174]: E0701 08:43:32.831283 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.831604 kubelet[3174]: E0701 08:43:32.831563 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.831790 kubelet[3174]: W0701 08:43:32.831670 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.831790 kubelet[3174]: E0701 08:43:32.831692 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.832091 kubelet[3174]: E0701 08:43:32.832019 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.832304 kubelet[3174]: W0701 08:43:32.832145 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.832304 kubelet[3174]: E0701 08:43:32.832202 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.832669 kubelet[3174]: E0701 08:43:32.832647 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.832934 kubelet[3174]: W0701 08:43:32.832744 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.832934 kubelet[3174]: E0701 08:43:32.832781 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.833116 kubelet[3174]: E0701 08:43:32.833108 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.833196 kubelet[3174]: W0701 08:43:32.833140 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.833196 kubelet[3174]: E0701 08:43:32.833151 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.833497 kubelet[3174]: E0701 08:43:32.833464 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.833497 kubelet[3174]: W0701 08:43:32.833477 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.833497 kubelet[3174]: E0701 08:43:32.833488 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.833881 kubelet[3174]: E0701 08:43:32.833864 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.833957 kubelet[3174]: W0701 08:43:32.833881 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.833957 kubelet[3174]: E0701 08:43:32.833892 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.834025 kubelet[3174]: E0701 08:43:32.834016 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.834025 kubelet[3174]: W0701 08:43:32.834023 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.834071 kubelet[3174]: E0701 08:43:32.834031 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.834473 kubelet[3174]: E0701 08:43:32.834407 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.834473 kubelet[3174]: W0701 08:43:32.834421 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.834473 kubelet[3174]: E0701 08:43:32.834433 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.834577 kubelet[3174]: E0701 08:43:32.834533 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.834577 kubelet[3174]: W0701 08:43:32.834538 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.834577 kubelet[3174]: E0701 08:43:32.834545 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.834635 kubelet[3174]: E0701 08:43:32.834631 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.834654 kubelet[3174]: W0701 08:43:32.834636 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.834654 kubelet[3174]: E0701 08:43:32.834642 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.834862 kubelet[3174]: E0701 08:43:32.834736 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.834862 kubelet[3174]: W0701 08:43:32.834742 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.834862 kubelet[3174]: E0701 08:43:32.834773 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.834862 kubelet[3174]: E0701 08:43:32.834853 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.834862 kubelet[3174]: W0701 08:43:32.834859 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.834981 kubelet[3174]: E0701 08:43:32.834864 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.834981 kubelet[3174]: E0701 08:43:32.834939 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.834981 kubelet[3174]: W0701 08:43:32.834944 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.834981 kubelet[3174]: E0701 08:43:32.834949 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.835057 kubelet[3174]: E0701 08:43:32.835020 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.835057 kubelet[3174]: W0701 08:43:32.835024 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.835057 kubelet[3174]: E0701 08:43:32.835029 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.842601 containerd[1721]: time="2025-07-01T08:43:32.842576864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5gd8x,Uid:c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b,Namespace:calico-system,Attempt:0,}"
Jul 1 08:43:32.847295 kubelet[3174]: E0701 08:43:32.847273 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.847452 kubelet[3174]: W0701 08:43:32.847375 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.847452 kubelet[3174]: E0701 08:43:32.847396 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.847452 kubelet[3174]: I0701 08:43:32.847419 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/921bd48b-6a52-4928-98d5-dc65a968d1c0-socket-dir\") pod \"csi-node-driver-cpd2x\" (UID: \"921bd48b-6a52-4928-98d5-dc65a968d1c0\") " pod="calico-system/csi-node-driver-cpd2x"
Jul 1 08:43:32.847728 kubelet[3174]: E0701 08:43:32.847718 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.847808 kubelet[3174]: W0701 08:43:32.847769 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.847949 kubelet[3174]: E0701 08:43:32.847942 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.848063 kubelet[3174]: W0701 08:43:32.847982 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.848063 kubelet[3174]: E0701 08:43:32.847992 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.848063 kubelet[3174]: I0701 08:43:32.848011 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/921bd48b-6a52-4928-98d5-dc65a968d1c0-kubelet-dir\") pod \"csi-node-driver-cpd2x\" (UID: \"921bd48b-6a52-4928-98d5-dc65a968d1c0\") " pod="calico-system/csi-node-driver-cpd2x"
Jul 1 08:43:32.848147 kubelet[3174]: E0701 08:43:32.848140 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.848311 kubelet[3174]: E0701 08:43:32.848292 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.848311 kubelet[3174]: W0701 08:43:32.848301 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.848435 kubelet[3174]: E0701 08:43:32.848371 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.848435 kubelet[3174]: I0701 08:43:32.848387 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/921bd48b-6a52-4928-98d5-dc65a968d1c0-registration-dir\") pod \"csi-node-driver-cpd2x\" (UID: \"921bd48b-6a52-4928-98d5-dc65a968d1c0\") " pod="calico-system/csi-node-driver-cpd2x"
Jul 1 08:43:32.848640 kubelet[3174]: E0701 08:43:32.848610 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.848640 kubelet[3174]: W0701 08:43:32.848620 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.848640 kubelet[3174]: E0701 08:43:32.848630 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:43:32.848908 kubelet[3174]: E0701 08:43:32.848901 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:43:32.849006 kubelet[3174]: W0701 08:43:32.848970 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:43:32.849006 kubelet[3174]: E0701 08:43:32.848984 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 1 08:43:32.849171 kubelet[3174]: E0701 08:43:32.849165 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.849220 kubelet[3174]: W0701 08:43:32.849201 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.849274 kubelet[3174]: E0701 08:43:32.849249 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.849403 kubelet[3174]: E0701 08:43:32.849389 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.849403 kubelet[3174]: W0701 08:43:32.849396 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.849526 kubelet[3174]: E0701 08:43:32.849467 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.849604 kubelet[3174]: E0701 08:43:32.849583 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.849604 kubelet[3174]: W0701 08:43:32.849590 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.849683 kubelet[3174]: E0701 08:43:32.849657 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.849851 kubelet[3174]: E0701 08:43:32.849833 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.852737 kubelet[3174]: W0701 08:43:32.849874 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.852737 kubelet[3174]: E0701 08:43:32.849882 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.852737 kubelet[3174]: I0701 08:43:32.849912 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxvwc\" (UniqueName: \"kubernetes.io/projected/921bd48b-6a52-4928-98d5-dc65a968d1c0-kube-api-access-nxvwc\") pod \"csi-node-driver-cpd2x\" (UID: \"921bd48b-6a52-4928-98d5-dc65a968d1c0\") " pod="calico-system/csi-node-driver-cpd2x" Jul 1 08:43:32.852737 kubelet[3174]: E0701 08:43:32.850032 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.852737 kubelet[3174]: W0701 08:43:32.850037 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.852737 kubelet[3174]: E0701 08:43:32.850047 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.852737 kubelet[3174]: E0701 08:43:32.850193 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.852737 kubelet[3174]: W0701 08:43:32.850199 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.852737 kubelet[3174]: E0701 08:43:32.850223 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.853314 kubelet[3174]: E0701 08:43:32.850349 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.853314 kubelet[3174]: W0701 08:43:32.850355 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.853314 kubelet[3174]: E0701 08:43:32.850362 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.853314 kubelet[3174]: I0701 08:43:32.850392 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/921bd48b-6a52-4928-98d5-dc65a968d1c0-varrun\") pod \"csi-node-driver-cpd2x\" (UID: \"921bd48b-6a52-4928-98d5-dc65a968d1c0\") " pod="calico-system/csi-node-driver-cpd2x" Jul 1 08:43:32.853314 kubelet[3174]: E0701 08:43:32.850559 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.853314 kubelet[3174]: W0701 08:43:32.850566 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.853314 kubelet[3174]: E0701 08:43:32.850574 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.853314 kubelet[3174]: E0701 08:43:32.850668 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.853314 kubelet[3174]: W0701 08:43:32.850683 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.853833 kubelet[3174]: E0701 08:43:32.850689 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.890526 containerd[1721]: time="2025-07-01T08:43:32.890492392Z" level=info msg="connecting to shim 2ec7a2522a6c32b40cb90a620a8ab11c3335bde55fccdbe9961d35efecad1fd5" address="unix:///run/containerd/s/11e05ad58ac838010d229ce5bf5ed618998b4dd18909eb39e7c41b0e369e1918" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:43:32.907877 systemd[1]: Started cri-containerd-2ec7a2522a6c32b40cb90a620a8ab11c3335bde55fccdbe9961d35efecad1fd5.scope - libcontainer container 2ec7a2522a6c32b40cb90a620a8ab11c3335bde55fccdbe9961d35efecad1fd5. 
Jul 1 08:43:32.926470 containerd[1721]: time="2025-07-01T08:43:32.926448856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5gd8x,Uid:c9b1ae95-8c4c-43a3-9ff4-059c376d0e3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ec7a2522a6c32b40cb90a620a8ab11c3335bde55fccdbe9961d35efecad1fd5\"" Jul 1 08:43:32.950830 kubelet[3174]: E0701 08:43:32.950812 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.950830 kubelet[3174]: W0701 08:43:32.950826 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.950944 kubelet[3174]: E0701 08:43:32.950845 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.951006 kubelet[3174]: E0701 08:43:32.950992 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.951006 kubelet[3174]: W0701 08:43:32.951000 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.951092 kubelet[3174]: E0701 08:43:32.951012 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.951185 kubelet[3174]: E0701 08:43:32.951131 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.951185 kubelet[3174]: W0701 08:43:32.951141 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.951185 kubelet[3174]: E0701 08:43:32.951148 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.951283 kubelet[3174]: E0701 08:43:32.951245 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.951283 kubelet[3174]: W0701 08:43:32.951253 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.951283 kubelet[3174]: E0701 08:43:32.951265 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.951376 kubelet[3174]: E0701 08:43:32.951356 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.951376 kubelet[3174]: W0701 08:43:32.951361 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.951376 kubelet[3174]: E0701 08:43:32.951368 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.951488 kubelet[3174]: E0701 08:43:32.951461 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.951488 kubelet[3174]: W0701 08:43:32.951469 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.951488 kubelet[3174]: E0701 08:43:32.951482 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.951639 kubelet[3174]: E0701 08:43:32.951632 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.951668 kubelet[3174]: W0701 08:43:32.951639 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.951668 kubelet[3174]: E0701 08:43:32.951654 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.951780 kubelet[3174]: E0701 08:43:32.951769 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.951780 kubelet[3174]: W0701 08:43:32.951777 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.951866 kubelet[3174]: E0701 08:43:32.951787 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.951900 kubelet[3174]: E0701 08:43:32.951882 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.951900 kubelet[3174]: W0701 08:43:32.951887 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.951900 kubelet[3174]: E0701 08:43:32.951896 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.952025 kubelet[3174]: E0701 08:43:32.952018 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.952054 kubelet[3174]: W0701 08:43:32.952048 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.952142 kubelet[3174]: E0701 08:43:32.952060 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.952242 kubelet[3174]: E0701 08:43:32.952227 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.952242 kubelet[3174]: W0701 08:43:32.952237 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.952298 kubelet[3174]: E0701 08:43:32.952249 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.952431 kubelet[3174]: E0701 08:43:32.952418 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.952431 kubelet[3174]: W0701 08:43:32.952429 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.952555 kubelet[3174]: E0701 08:43:32.952468 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.952555 kubelet[3174]: E0701 08:43:32.952523 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.952555 kubelet[3174]: W0701 08:43:32.952529 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.952682 kubelet[3174]: E0701 08:43:32.952602 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.952682 kubelet[3174]: E0701 08:43:32.952637 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.952682 kubelet[3174]: W0701 08:43:32.952642 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.952682 kubelet[3174]: E0701 08:43:32.952658 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.952831 kubelet[3174]: E0701 08:43:32.952780 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.952831 kubelet[3174]: W0701 08:43:32.952785 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.952831 kubelet[3174]: E0701 08:43:32.952795 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.952955 kubelet[3174]: E0701 08:43:32.952884 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.952955 kubelet[3174]: W0701 08:43:32.952889 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.952955 kubelet[3174]: E0701 08:43:32.952901 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.953052 kubelet[3174]: E0701 08:43:32.952992 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.953052 kubelet[3174]: W0701 08:43:32.952997 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.953052 kubelet[3174]: E0701 08:43:32.953006 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.953163 kubelet[3174]: E0701 08:43:32.953132 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.953163 kubelet[3174]: W0701 08:43:32.953137 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.953230 kubelet[3174]: E0701 08:43:32.953217 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.953259 kubelet[3174]: E0701 08:43:32.953256 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.953284 kubelet[3174]: W0701 08:43:32.953261 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.953284 kubelet[3174]: E0701 08:43:32.953267 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.953488 kubelet[3174]: E0701 08:43:32.953424 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.953488 kubelet[3174]: W0701 08:43:32.953430 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.953488 kubelet[3174]: E0701 08:43:32.953438 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.953881 kubelet[3174]: E0701 08:43:32.953869 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.954247 kubelet[3174]: W0701 08:43:32.953882 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.954247 kubelet[3174]: E0701 08:43:32.953995 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.954247 kubelet[3174]: W0701 08:43:32.954000 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.954247 kubelet[3174]: E0701 08:43:32.954008 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.954247 kubelet[3174]: E0701 08:43:32.954121 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.954247 kubelet[3174]: W0701 08:43:32.954126 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.954247 kubelet[3174]: E0701 08:43:32.954133 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.954247 kubelet[3174]: E0701 08:43:32.954188 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.954444 kubelet[3174]: E0701 08:43:32.954280 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.954444 kubelet[3174]: W0701 08:43:32.954286 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.954444 kubelet[3174]: E0701 08:43:32.954294 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:32.954444 kubelet[3174]: E0701 08:43:32.954407 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.954444 kubelet[3174]: W0701 08:43:32.954427 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.954444 kubelet[3174]: E0701 08:43:32.954434 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:32.960017 kubelet[3174]: E0701 08:43:32.960001 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:32.960017 kubelet[3174]: W0701 08:43:32.960012 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:32.960102 kubelet[3174]: E0701 08:43:32.960023 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:34.143233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1549547285.mount: Deactivated successfully. Jul 1 08:43:34.240219 kubelet[3174]: E0701 08:43:34.239213 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpd2x" podUID="921bd48b-6a52-4928-98d5-dc65a968d1c0" Jul 1 08:43:34.775760 containerd[1721]: time="2025-07-01T08:43:34.775725586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:43:34.779061 containerd[1721]: time="2025-07-01T08:43:34.779024158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 1 08:43:34.781725 containerd[1721]: time="2025-07-01T08:43:34.781595720Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:43:34.786470 containerd[1721]: time="2025-07-01T08:43:34.786428412Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:43:34.787077 containerd[1721]: time="2025-07-01T08:43:34.786808265Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.128598548s" Jul 1 08:43:34.787077 containerd[1721]: time="2025-07-01T08:43:34.786834927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 1 08:43:34.788082 containerd[1721]: time="2025-07-01T08:43:34.787568261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 1 08:43:34.799411 containerd[1721]: time="2025-07-01T08:43:34.799390768Z" level=info msg="CreateContainer within sandbox \"852cb48582ec0f9cb0750f4e3f48973cac3372b7ca56681bf947f2ab24b6f461\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 1 08:43:34.825251 containerd[1721]: time="2025-07-01T08:43:34.823870475Z" level=info msg="Container 3b11c51ccecfb78513f0bc540eb8a799443121dc48be3e5fe0857b460e65ae28: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:43:34.846474 containerd[1721]: time="2025-07-01T08:43:34.846441662Z" level=info msg="CreateContainer within sandbox \"852cb48582ec0f9cb0750f4e3f48973cac3372b7ca56681bf947f2ab24b6f461\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3b11c51ccecfb78513f0bc540eb8a799443121dc48be3e5fe0857b460e65ae28\"" Jul 1 08:43:34.847046 containerd[1721]: time="2025-07-01T08:43:34.847005557Z" level=info msg="StartContainer for 
\"3b11c51ccecfb78513f0bc540eb8a799443121dc48be3e5fe0857b460e65ae28\"" Jul 1 08:43:34.848594 containerd[1721]: time="2025-07-01T08:43:34.848571435Z" level=info msg="connecting to shim 3b11c51ccecfb78513f0bc540eb8a799443121dc48be3e5fe0857b460e65ae28" address="unix:///run/containerd/s/e90bcc19577056741df5d9f252b6742325334dc46a8599846f268ebcbefc306b" protocol=ttrpc version=3 Jul 1 08:43:34.874881 systemd[1]: Started cri-containerd-3b11c51ccecfb78513f0bc540eb8a799443121dc48be3e5fe0857b460e65ae28.scope - libcontainer container 3b11c51ccecfb78513f0bc540eb8a799443121dc48be3e5fe0857b460e65ae28. Jul 1 08:43:34.913312 containerd[1721]: time="2025-07-01T08:43:34.913294925Z" level=info msg="StartContainer for \"3b11c51ccecfb78513f0bc540eb8a799443121dc48be3e5fe0857b460e65ae28\" returns successfully" Jul 1 08:43:35.350050 kubelet[3174]: E0701 08:43:35.350024 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.350050 kubelet[3174]: W0701 08:43:35.350042 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.350544 kubelet[3174]: E0701 08:43:35.350063 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.350544 kubelet[3174]: E0701 08:43:35.350162 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.350544 kubelet[3174]: W0701 08:43:35.350168 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.350544 kubelet[3174]: E0701 08:43:35.350175 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.350544 kubelet[3174]: E0701 08:43:35.350253 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.350544 kubelet[3174]: W0701 08:43:35.350257 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.350544 kubelet[3174]: E0701 08:43:35.350263 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.350544 kubelet[3174]: E0701 08:43:35.350372 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.350544 kubelet[3174]: W0701 08:43:35.350377 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.350544 kubelet[3174]: E0701 08:43:35.350382 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.350851 kubelet[3174]: E0701 08:43:35.350460 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.350851 kubelet[3174]: W0701 08:43:35.350464 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.350851 kubelet[3174]: E0701 08:43:35.350469 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.350851 kubelet[3174]: E0701 08:43:35.350539 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.350851 kubelet[3174]: W0701 08:43:35.350543 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.350851 kubelet[3174]: E0701 08:43:35.350548 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.350851 kubelet[3174]: E0701 08:43:35.350618 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.350851 kubelet[3174]: W0701 08:43:35.350622 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.350851 kubelet[3174]: E0701 08:43:35.350628 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.350851 kubelet[3174]: E0701 08:43:35.350695 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.351104 kubelet[3174]: W0701 08:43:35.350699 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.351104 kubelet[3174]: E0701 08:43:35.350705 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.351104 kubelet[3174]: E0701 08:43:35.350814 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.351104 kubelet[3174]: W0701 08:43:35.350819 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.351104 kubelet[3174]: E0701 08:43:35.350825 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.351104 kubelet[3174]: E0701 08:43:35.350896 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.351104 kubelet[3174]: W0701 08:43:35.350900 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.351104 kubelet[3174]: E0701 08:43:35.350905 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.351104 kubelet[3174]: E0701 08:43:35.350971 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.351104 kubelet[3174]: W0701 08:43:35.350975 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.351359 kubelet[3174]: E0701 08:43:35.350980 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.351359 kubelet[3174]: E0701 08:43:35.351046 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.351359 kubelet[3174]: W0701 08:43:35.351050 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.351359 kubelet[3174]: E0701 08:43:35.351056 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.351359 kubelet[3174]: E0701 08:43:35.351124 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.351359 kubelet[3174]: W0701 08:43:35.351128 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.351359 kubelet[3174]: E0701 08:43:35.351133 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.351359 kubelet[3174]: E0701 08:43:35.351203 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.351359 kubelet[3174]: W0701 08:43:35.351207 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.351359 kubelet[3174]: E0701 08:43:35.351212 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.351514 kubelet[3174]: E0701 08:43:35.351277 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.351514 kubelet[3174]: W0701 08:43:35.351281 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.351514 kubelet[3174]: E0701 08:43:35.351286 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.368594 kubelet[3174]: E0701 08:43:35.368571 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.368594 kubelet[3174]: W0701 08:43:35.368586 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.368594 kubelet[3174]: E0701 08:43:35.368602 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.368793 kubelet[3174]: E0701 08:43:35.368717 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.368793 kubelet[3174]: W0701 08:43:35.368722 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.368793 kubelet[3174]: E0701 08:43:35.368730 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.368902 kubelet[3174]: E0701 08:43:35.368878 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.368902 kubelet[3174]: W0701 08:43:35.368898 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.368960 kubelet[3174]: E0701 08:43:35.368911 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.369074 kubelet[3174]: E0701 08:43:35.369067 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.369102 kubelet[3174]: W0701 08:43:35.369074 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.369102 kubelet[3174]: E0701 08:43:35.369087 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.369184 kubelet[3174]: E0701 08:43:35.369175 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.369184 kubelet[3174]: W0701 08:43:35.369181 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.369252 kubelet[3174]: E0701 08:43:35.369188 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.369304 kubelet[3174]: E0701 08:43:35.369259 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.369304 kubelet[3174]: W0701 08:43:35.369263 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.369304 kubelet[3174]: E0701 08:43:35.369269 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.369375 kubelet[3174]: E0701 08:43:35.369366 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.369399 kubelet[3174]: W0701 08:43:35.369374 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.369399 kubelet[3174]: E0701 08:43:35.369385 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.369496 kubelet[3174]: E0701 08:43:35.369483 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.369496 kubelet[3174]: W0701 08:43:35.369492 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.369553 kubelet[3174]: E0701 08:43:35.369499 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.369593 kubelet[3174]: E0701 08:43:35.369574 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.369593 kubelet[3174]: W0701 08:43:35.369579 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.369593 kubelet[3174]: E0701 08:43:35.369587 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.369698 kubelet[3174]: E0701 08:43:35.369689 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.369698 kubelet[3174]: W0701 08:43:35.369696 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.369741 kubelet[3174]: E0701 08:43:35.369702 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.369916 kubelet[3174]: E0701 08:43:35.369824 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.369916 kubelet[3174]: W0701 08:43:35.369830 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.369916 kubelet[3174]: E0701 08:43:35.369849 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.370009 kubelet[3174]: E0701 08:43:35.369926 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.370009 kubelet[3174]: W0701 08:43:35.369930 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.370009 kubelet[3174]: E0701 08:43:35.369944 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.370076 kubelet[3174]: E0701 08:43:35.370024 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.370076 kubelet[3174]: W0701 08:43:35.370029 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.370076 kubelet[3174]: E0701 08:43:35.370040 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.370145 kubelet[3174]: E0701 08:43:35.370137 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.370145 kubelet[3174]: W0701 08:43:35.370141 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.370188 kubelet[3174]: E0701 08:43:35.370151 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.370275 kubelet[3174]: E0701 08:43:35.370266 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.370302 kubelet[3174]: W0701 08:43:35.370276 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.370302 kubelet[3174]: E0701 08:43:35.370282 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.370539 kubelet[3174]: E0701 08:43:35.370375 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.370539 kubelet[3174]: W0701 08:43:35.370380 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.370539 kubelet[3174]: E0701 08:43:35.370386 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:35.370631 kubelet[3174]: E0701 08:43:35.370598 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.370631 kubelet[3174]: W0701 08:43:35.370606 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.370631 kubelet[3174]: E0701 08:43:35.370618 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:35.370718 kubelet[3174]: E0701 08:43:35.370713 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:35.370739 kubelet[3174]: W0701 08:43:35.370719 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:35.370739 kubelet[3174]: E0701 08:43:35.370724 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.241073 kubelet[3174]: E0701 08:43:36.241030 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpd2x" podUID="921bd48b-6a52-4928-98d5-dc65a968d1c0" Jul 1 08:43:36.293901 kubelet[3174]: I0701 08:43:36.293879 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 1 08:43:36.359909 kubelet[3174]: E0701 08:43:36.359886 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.359909 kubelet[3174]: W0701 08:43:36.359902 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.360227 kubelet[3174]: E0701 08:43:36.359917 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.360227 kubelet[3174]: E0701 08:43:36.360013 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.360227 kubelet[3174]: W0701 08:43:36.360018 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.360227 kubelet[3174]: E0701 08:43:36.360024 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.360227 kubelet[3174]: E0701 08:43:36.360102 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.360227 kubelet[3174]: W0701 08:43:36.360107 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.360227 kubelet[3174]: E0701 08:43:36.360113 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.360227 kubelet[3174]: E0701 08:43:36.360181 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.360227 kubelet[3174]: W0701 08:43:36.360185 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.360227 kubelet[3174]: E0701 08:43:36.360189 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.360437 kubelet[3174]: E0701 08:43:36.360264 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.360437 kubelet[3174]: W0701 08:43:36.360269 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.360437 kubelet[3174]: E0701 08:43:36.360275 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.360437 kubelet[3174]: E0701 08:43:36.360336 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.360437 kubelet[3174]: W0701 08:43:36.360341 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.360437 kubelet[3174]: E0701 08:43:36.360346 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.360437 kubelet[3174]: E0701 08:43:36.360407 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.360437 kubelet[3174]: W0701 08:43:36.360412 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.360437 kubelet[3174]: E0701 08:43:36.360417 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.360614 kubelet[3174]: E0701 08:43:36.360478 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.360614 kubelet[3174]: W0701 08:43:36.360482 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.360614 kubelet[3174]: E0701 08:43:36.360487 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.360614 kubelet[3174]: E0701 08:43:36.360556 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.360614 kubelet[3174]: W0701 08:43:36.360560 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.360614 kubelet[3174]: E0701 08:43:36.360565 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.360732 kubelet[3174]: E0701 08:43:36.360626 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.360732 kubelet[3174]: W0701 08:43:36.360630 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.360732 kubelet[3174]: E0701 08:43:36.360635 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.360732 kubelet[3174]: E0701 08:43:36.360697 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.360732 kubelet[3174]: W0701 08:43:36.360701 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.360732 kubelet[3174]: E0701 08:43:36.360706 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.360876 kubelet[3174]: E0701 08:43:36.360801 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.360876 kubelet[3174]: W0701 08:43:36.360805 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.360876 kubelet[3174]: E0701 08:43:36.360812 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.360938 kubelet[3174]: E0701 08:43:36.360884 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.360938 kubelet[3174]: W0701 08:43:36.360888 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.360938 kubelet[3174]: E0701 08:43:36.360894 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.360995 kubelet[3174]: E0701 08:43:36.360959 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.360995 kubelet[3174]: W0701 08:43:36.360963 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.360995 kubelet[3174]: E0701 08:43:36.360969 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.361058 kubelet[3174]: E0701 08:43:36.361034 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.361058 kubelet[3174]: W0701 08:43:36.361038 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.361058 kubelet[3174]: E0701 08:43:36.361043 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.376029 kubelet[3174]: E0701 08:43:36.376013 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.376029 kubelet[3174]: W0701 08:43:36.376027 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.376131 kubelet[3174]: E0701 08:43:36.376040 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.376164 kubelet[3174]: E0701 08:43:36.376158 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.376186 kubelet[3174]: W0701 08:43:36.376165 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.376186 kubelet[3174]: E0701 08:43:36.376172 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.376328 kubelet[3174]: E0701 08:43:36.376314 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.376328 kubelet[3174]: W0701 08:43:36.376326 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.376379 kubelet[3174]: E0701 08:43:36.376340 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.376468 kubelet[3174]: E0701 08:43:36.376462 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.376498 kubelet[3174]: W0701 08:43:36.376469 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.376498 kubelet[3174]: E0701 08:43:36.376485 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.376608 kubelet[3174]: E0701 08:43:36.376587 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.376608 kubelet[3174]: W0701 08:43:36.376607 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.376658 kubelet[3174]: E0701 08:43:36.376615 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.376741 kubelet[3174]: E0701 08:43:36.376715 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.376741 kubelet[3174]: W0701 08:43:36.376738 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.376805 kubelet[3174]: E0701 08:43:36.376759 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.376939 kubelet[3174]: E0701 08:43:36.376914 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.376939 kubelet[3174]: W0701 08:43:36.376936 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.376998 kubelet[3174]: E0701 08:43:36.376947 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.377069 kubelet[3174]: E0701 08:43:36.377057 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.377069 kubelet[3174]: W0701 08:43:36.377065 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.377129 kubelet[3174]: E0701 08:43:36.377077 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.377215 kubelet[3174]: E0701 08:43:36.377194 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.377215 kubelet[3174]: W0701 08:43:36.377212 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.377270 kubelet[3174]: E0701 08:43:36.377226 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.377344 kubelet[3174]: E0701 08:43:36.377322 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.377344 kubelet[3174]: W0701 08:43:36.377341 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.377414 kubelet[3174]: E0701 08:43:36.377369 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.377462 kubelet[3174]: E0701 08:43:36.377417 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.377462 kubelet[3174]: W0701 08:43:36.377421 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.377462 kubelet[3174]: E0701 08:43:36.377434 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.377534 kubelet[3174]: E0701 08:43:36.377494 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.377534 kubelet[3174]: W0701 08:43:36.377499 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.377534 kubelet[3174]: E0701 08:43:36.377507 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.377613 kubelet[3174]: E0701 08:43:36.377609 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.377640 kubelet[3174]: W0701 08:43:36.377614 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.377640 kubelet[3174]: E0701 08:43:36.377622 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.377791 kubelet[3174]: E0701 08:43:36.377782 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.377823 kubelet[3174]: W0701 08:43:36.377797 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.377823 kubelet[3174]: E0701 08:43:36.377808 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.377885 kubelet[3174]: E0701 08:43:36.377882 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.377911 kubelet[3174]: W0701 08:43:36.377886 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.377911 kubelet[3174]: E0701 08:43:36.377893 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.378067 kubelet[3174]: E0701 08:43:36.378043 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.378067 kubelet[3174]: W0701 08:43:36.378065 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.378117 kubelet[3174]: E0701 08:43:36.378075 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:36.378427 kubelet[3174]: E0701 08:43:36.378363 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.378427 kubelet[3174]: W0701 08:43:36.378379 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.378427 kubelet[3174]: E0701 08:43:36.378395 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:43:36.378801 kubelet[3174]: E0701 08:43:36.378781 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:43:36.378801 kubelet[3174]: W0701 08:43:36.378799 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:43:36.378880 kubelet[3174]: E0701 08:43:36.378810 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:43:38.239094 kubelet[3174]: E0701 08:43:38.238792 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpd2x" podUID="921bd48b-6a52-4928-98d5-dc65a968d1c0" Jul 1 08:43:39.928800 containerd[1721]: time="2025-07-01T08:43:39.928761772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:43:39.930926 containerd[1721]: time="2025-07-01T08:43:39.930888678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 1 08:43:39.934184 containerd[1721]: time="2025-07-01T08:43:39.934145613Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:43:39.937684 containerd[1721]: time="2025-07-01T08:43:39.937643913Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:43:39.938239 containerd[1721]: time="2025-07-01T08:43:39.937971481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 5.150374691s" Jul 1 08:43:39.938239 containerd[1721]: time="2025-07-01T08:43:39.937997548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 1 08:43:39.939661 containerd[1721]: time="2025-07-01T08:43:39.939640042Z" level=info msg="CreateContainer within sandbox \"2ec7a2522a6c32b40cb90a620a8ab11c3335bde55fccdbe9961d35efecad1fd5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 1 08:43:39.955474 containerd[1721]: time="2025-07-01T08:43:39.955447414Z" level=info msg="Container de1532453a8d7a9297c8e794f46a80f56ee883c2c94b9b33c01f418040040740: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:43:39.973411 containerd[1721]: time="2025-07-01T08:43:39.973388606Z" level=info msg="CreateContainer within sandbox \"2ec7a2522a6c32b40cb90a620a8ab11c3335bde55fccdbe9961d35efecad1fd5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"de1532453a8d7a9297c8e794f46a80f56ee883c2c94b9b33c01f418040040740\"" Jul 1 08:43:39.973698 containerd[1721]: time="2025-07-01T08:43:39.973677621Z" level=info msg="StartContainer for \"de1532453a8d7a9297c8e794f46a80f56ee883c2c94b9b33c01f418040040740\"" Jul 1 08:43:39.975113 containerd[1721]: time="2025-07-01T08:43:39.975084646Z" 
level=info msg="connecting to shim de1532453a8d7a9297c8e794f46a80f56ee883c2c94b9b33c01f418040040740" address="unix:///run/containerd/s/11e05ad58ac838010d229ce5bf5ed618998b4dd18909eb39e7c41b0e369e1918" protocol=ttrpc version=3 Jul 1 08:43:39.993890 systemd[1]: Started cri-containerd-de1532453a8d7a9297c8e794f46a80f56ee883c2c94b9b33c01f418040040740.scope - libcontainer container de1532453a8d7a9297c8e794f46a80f56ee883c2c94b9b33c01f418040040740. Jul 1 08:43:40.023386 containerd[1721]: time="2025-07-01T08:43:40.023327483Z" level=info msg="StartContainer for \"de1532453a8d7a9297c8e794f46a80f56ee883c2c94b9b33c01f418040040740\" returns successfully" Jul 1 08:43:40.026364 systemd[1]: cri-containerd-de1532453a8d7a9297c8e794f46a80f56ee883c2c94b9b33c01f418040040740.scope: Deactivated successfully. Jul 1 08:43:40.028340 containerd[1721]: time="2025-07-01T08:43:40.028315503Z" level=info msg="received exit event container_id:\"de1532453a8d7a9297c8e794f46a80f56ee883c2c94b9b33c01f418040040740\" id:\"de1532453a8d7a9297c8e794f46a80f56ee883c2c94b9b33c01f418040040740\" pid:3872 exited_at:{seconds:1751359420 nanos:28022679}" Jul 1 08:43:40.028505 containerd[1721]: time="2025-07-01T08:43:40.028356152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de1532453a8d7a9297c8e794f46a80f56ee883c2c94b9b33c01f418040040740\" id:\"de1532453a8d7a9297c8e794f46a80f56ee883c2c94b9b33c01f418040040740\" pid:3872 exited_at:{seconds:1751359420 nanos:28022679}" Jul 1 08:43:40.042985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de1532453a8d7a9297c8e794f46a80f56ee883c2c94b9b33c01f418040040740-rootfs.mount: Deactivated successfully. 
Jul 1 08:43:40.239634 kubelet[3174]: E0701 08:43:40.238734 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpd2x" podUID="921bd48b-6a52-4928-98d5-dc65a968d1c0" Jul 1 08:43:40.315534 kubelet[3174]: I0701 08:43:40.315275 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9c5d66c57-v2vzg" podStartSLOduration=6.185499727 podStartE2EDuration="8.315259371s" podCreationTimestamp="2025-07-01 08:43:32 +0000 UTC" firstStartedPulling="2025-07-01 08:43:32.657654478 +0000 UTC m=+18.503265561" lastFinishedPulling="2025-07-01 08:43:34.78741412 +0000 UTC m=+20.633025205" observedRunningTime="2025-07-01 08:43:35.302134531 +0000 UTC m=+21.147745632" watchObservedRunningTime="2025-07-01 08:43:40.315259371 +0000 UTC m=+26.160870490" Jul 1 08:43:42.239612 kubelet[3174]: E0701 08:43:42.238732 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpd2x" podUID="921bd48b-6a52-4928-98d5-dc65a968d1c0" Jul 1 08:43:42.306906 containerd[1721]: time="2025-07-01T08:43:42.306794131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 1 08:43:44.240586 kubelet[3174]: E0701 08:43:44.240542 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpd2x" podUID="921bd48b-6a52-4928-98d5-dc65a968d1c0" Jul 1 08:43:45.047376 containerd[1721]: time="2025-07-01T08:43:45.047337883Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:43:45.051321 containerd[1721]: time="2025-07-01T08:43:45.051280530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 1 08:43:45.055440 containerd[1721]: time="2025-07-01T08:43:45.054683170Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:43:45.058783 containerd[1721]: time="2025-07-01T08:43:45.058746664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:43:45.059167 containerd[1721]: time="2025-07-01T08:43:45.059147081Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.752318038s" Jul 1 08:43:45.059215 containerd[1721]: time="2025-07-01T08:43:45.059175166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 1 08:43:45.060973 containerd[1721]: time="2025-07-01T08:43:45.060951962Z" level=info msg="CreateContainer within sandbox \"2ec7a2522a6c32b40cb90a620a8ab11c3335bde55fccdbe9961d35efecad1fd5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 1 08:43:45.094770 containerd[1721]: time="2025-07-01T08:43:45.092953359Z" level=info msg="Container 5cdc9ca9807164ed38e804da0f7b4d3fd2814bfbaf5925e7dbe3eb1adb27a22b: CDI devices from CRI Config.CDIDevices: 
[]" Jul 1 08:43:45.111650 containerd[1721]: time="2025-07-01T08:43:45.111627357Z" level=info msg="CreateContainer within sandbox \"2ec7a2522a6c32b40cb90a620a8ab11c3335bde55fccdbe9961d35efecad1fd5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5cdc9ca9807164ed38e804da0f7b4d3fd2814bfbaf5925e7dbe3eb1adb27a22b\"" Jul 1 08:43:45.112089 containerd[1721]: time="2025-07-01T08:43:45.112001058Z" level=info msg="StartContainer for \"5cdc9ca9807164ed38e804da0f7b4d3fd2814bfbaf5925e7dbe3eb1adb27a22b\"" Jul 1 08:43:45.113535 containerd[1721]: time="2025-07-01T08:43:45.113497894Z" level=info msg="connecting to shim 5cdc9ca9807164ed38e804da0f7b4d3fd2814bfbaf5925e7dbe3eb1adb27a22b" address="unix:///run/containerd/s/11e05ad58ac838010d229ce5bf5ed618998b4dd18909eb39e7c41b0e369e1918" protocol=ttrpc version=3 Jul 1 08:43:45.130904 systemd[1]: Started cri-containerd-5cdc9ca9807164ed38e804da0f7b4d3fd2814bfbaf5925e7dbe3eb1adb27a22b.scope - libcontainer container 5cdc9ca9807164ed38e804da0f7b4d3fd2814bfbaf5925e7dbe3eb1adb27a22b. 
Jul 1 08:43:45.159145 containerd[1721]: time="2025-07-01T08:43:45.159104861Z" level=info msg="StartContainer for \"5cdc9ca9807164ed38e804da0f7b4d3fd2814bfbaf5925e7dbe3eb1adb27a22b\" returns successfully" Jul 1 08:43:45.212034 kubelet[3174]: I0701 08:43:45.212008 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 1 08:43:46.242060 kubelet[3174]: E0701 08:43:46.241581 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cpd2x" podUID="921bd48b-6a52-4928-98d5-dc65a968d1c0" Jul 1 08:43:46.381576 containerd[1721]: time="2025-07-01T08:43:46.381539280Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 1 08:43:46.383330 systemd[1]: cri-containerd-5cdc9ca9807164ed38e804da0f7b4d3fd2814bfbaf5925e7dbe3eb1adb27a22b.scope: Deactivated successfully. Jul 1 08:43:46.383923 systemd[1]: cri-containerd-5cdc9ca9807164ed38e804da0f7b4d3fd2814bfbaf5925e7dbe3eb1adb27a22b.scope: Consumed 397ms CPU time, 195.2M memory peak, 171.2M written to disk. 
Jul 1 08:43:46.385257 containerd[1721]: time="2025-07-01T08:43:46.385231156Z" level=info msg="received exit event container_id:\"5cdc9ca9807164ed38e804da0f7b4d3fd2814bfbaf5925e7dbe3eb1adb27a22b\" id:\"5cdc9ca9807164ed38e804da0f7b4d3fd2814bfbaf5925e7dbe3eb1adb27a22b\" pid:3931 exited_at:{seconds:1751359426 nanos:385074244}" Jul 1 08:43:46.385481 containerd[1721]: time="2025-07-01T08:43:46.385463605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cdc9ca9807164ed38e804da0f7b4d3fd2814bfbaf5925e7dbe3eb1adb27a22b\" id:\"5cdc9ca9807164ed38e804da0f7b4d3fd2814bfbaf5925e7dbe3eb1adb27a22b\" pid:3931 exited_at:{seconds:1751359426 nanos:385074244}" Jul 1 08:43:46.402973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cdc9ca9807164ed38e804da0f7b4d3fd2814bfbaf5925e7dbe3eb1adb27a22b-rootfs.mount: Deactivated successfully. Jul 1 08:43:46.460935 kubelet[3174]: I0701 08:43:46.460853 3174 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 1 08:43:46.537168 systemd[1]: Created slice kubepods-burstable-pod90063c43_e610_4281_a138_4d72595d6f99.slice - libcontainer container kubepods-burstable-pod90063c43_e610_4281_a138_4d72595d6f99.slice. Jul 1 08:43:46.554503 systemd[1]: Created slice kubepods-besteffort-pod20d55f92_2563_4989_a7d2_36b21d3eab8a.slice - libcontainer container kubepods-besteffort-pod20d55f92_2563_4989_a7d2_36b21d3eab8a.slice. 
Jul 1 08:43:46.558165 kubelet[3174]: W0701 08:43:46.558132 3174 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-9999.9.9-s-875ad0e937" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-9999.9.9-s-875ad0e937' and this object Jul 1 08:43:46.558248 kubelet[3174]: E0701 08:43:46.558171 3174 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-9999.9.9-s-875ad0e937\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-9999.9.9-s-875ad0e937' and this object" logger="UnhandledError" Jul 1 08:43:46.558248 kubelet[3174]: W0701 08:43:46.558208 3174 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-9999.9.9-s-875ad0e937" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-9999.9.9-s-875ad0e937' and this object Jul 1 08:43:46.558248 kubelet[3174]: E0701 08:43:46.558217 3174 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-9999.9.9-s-875ad0e937\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-9999.9.9-s-875ad0e937' and this object" logger="UnhandledError" Jul 1 08:43:46.567167 systemd[1]: Created slice kubepods-besteffort-podda129cb9_8f64_41b6_a573_9988f86280fc.slice - libcontainer container 
kubepods-besteffort-podda129cb9_8f64_41b6_a573_9988f86280fc.slice. Jul 1 08:43:46.571449 systemd[1]: Created slice kubepods-burstable-podcdf46a22_d014_4b57_97a1_328bba56303c.slice - libcontainer container kubepods-burstable-podcdf46a22_d014_4b57_97a1_328bba56303c.slice. Jul 1 08:43:46.577320 systemd[1]: Created slice kubepods-besteffort-pod252bf1bb_5284_4a70_8a20_be1644a678b7.slice - libcontainer container kubepods-besteffort-pod252bf1bb_5284_4a70_8a20_be1644a678b7.slice. Jul 1 08:43:46.585785 systemd[1]: Created slice kubepods-besteffort-pod6eeb1842_9b66_4bc5_aa6a_64536984e255.slice - libcontainer container kubepods-besteffort-pod6eeb1842_9b66_4bc5_aa6a_64536984e255.slice. Jul 1 08:43:46.588836 systemd[1]: Created slice kubepods-besteffort-podad4e2201_5a2c_4e2f_b6d1_01bb7791ddb2.slice - libcontainer container kubepods-besteffort-podad4e2201_5a2c_4e2f_b6d1_01bb7791ddb2.slice. Jul 1 08:43:46.645174 kubelet[3174]: I0701 08:43:46.645149 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc2hk\" (UniqueName: \"kubernetes.io/projected/20d55f92-2563-4989-a7d2-36b21d3eab8a-kube-api-access-lc2hk\") pod \"calico-apiserver-794ddc87fb-xsjw6\" (UID: \"20d55f92-2563-4989-a7d2-36b21d3eab8a\") " pod="calico-apiserver/calico-apiserver-794ddc87fb-xsjw6" Jul 1 08:43:46.645342 kubelet[3174]: I0701 08:43:46.645180 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-td68f\" (UID: \"ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2\") " pod="calico-system/goldmane-768f4c5c69-td68f" Jul 1 08:43:46.645342 kubelet[3174]: I0701 08:43:46.645198 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/252bf1bb-5284-4a70-8a20-be1644a678b7-calico-apiserver-certs\") pod \"calico-apiserver-794ddc87fb-4c49p\" (UID: \"252bf1bb-5284-4a70-8a20-be1644a678b7\") " pod="calico-apiserver/calico-apiserver-794ddc87fb-4c49p" Jul 1 08:43:46.645342 kubelet[3174]: I0701 08:43:46.645215 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd695\" (UniqueName: \"kubernetes.io/projected/252bf1bb-5284-4a70-8a20-be1644a678b7-kube-api-access-rd695\") pod \"calico-apiserver-794ddc87fb-4c49p\" (UID: \"252bf1bb-5284-4a70-8a20-be1644a678b7\") " pod="calico-apiserver/calico-apiserver-794ddc87fb-4c49p" Jul 1 08:43:46.645342 kubelet[3174]: I0701 08:43:46.645233 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdf46a22-d014-4b57-97a1-328bba56303c-config-volume\") pod \"coredns-668d6bf9bc-l7f2m\" (UID: \"cdf46a22-d014-4b57-97a1-328bba56303c\") " pod="kube-system/coredns-668d6bf9bc-l7f2m" Jul 1 08:43:46.645342 kubelet[3174]: I0701 08:43:46.645249 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xcwx\" (UniqueName: \"kubernetes.io/projected/da129cb9-8f64-41b6-a573-9988f86280fc-kube-api-access-9xcwx\") pod \"calico-kube-controllers-cbf49894c-dqk7k\" (UID: \"da129cb9-8f64-41b6-a573-9988f86280fc\") " pod="calico-system/calico-kube-controllers-cbf49894c-dqk7k" Jul 1 08:43:46.645471 kubelet[3174]: I0701 08:43:46.645267 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2-config\") pod \"goldmane-768f4c5c69-td68f\" (UID: \"ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2\") " pod="calico-system/goldmane-768f4c5c69-td68f" Jul 1 08:43:46.645471 kubelet[3174]: I0701 08:43:46.645282 3174 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90063c43-e610-4281-a138-4d72595d6f99-config-volume\") pod \"coredns-668d6bf9bc-cs2cr\" (UID: \"90063c43-e610-4281-a138-4d72595d6f99\") " pod="kube-system/coredns-668d6bf9bc-cs2cr" Jul 1 08:43:46.645471 kubelet[3174]: I0701 08:43:46.645299 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xzvn\" (UniqueName: \"kubernetes.io/projected/90063c43-e610-4281-a138-4d72595d6f99-kube-api-access-6xzvn\") pod \"coredns-668d6bf9bc-cs2cr\" (UID: \"90063c43-e610-4281-a138-4d72595d6f99\") " pod="kube-system/coredns-668d6bf9bc-cs2cr" Jul 1 08:43:46.645471 kubelet[3174]: I0701 08:43:46.645335 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eeb1842-9b66-4bc5-aa6a-64536984e255-whisker-ca-bundle\") pod \"whisker-56897d68d6-r6gjf\" (UID: \"6eeb1842-9b66-4bc5-aa6a-64536984e255\") " pod="calico-system/whisker-56897d68d6-r6gjf" Jul 1 08:43:46.645471 kubelet[3174]: I0701 08:43:46.645369 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2-goldmane-key-pair\") pod \"goldmane-768f4c5c69-td68f\" (UID: \"ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2\") " pod="calico-system/goldmane-768f4c5c69-td68f" Jul 1 08:43:46.645576 kubelet[3174]: I0701 08:43:46.645400 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s97ql\" (UniqueName: \"kubernetes.io/projected/cdf46a22-d014-4b57-97a1-328bba56303c-kube-api-access-s97ql\") pod \"coredns-668d6bf9bc-l7f2m\" (UID: \"cdf46a22-d014-4b57-97a1-328bba56303c\") " pod="kube-system/coredns-668d6bf9bc-l7f2m" Jul 1 08:43:46.645576 kubelet[3174]: I0701 08:43:46.645423 
3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da129cb9-8f64-41b6-a573-9988f86280fc-tigera-ca-bundle\") pod \"calico-kube-controllers-cbf49894c-dqk7k\" (UID: \"da129cb9-8f64-41b6-a573-9988f86280fc\") " pod="calico-system/calico-kube-controllers-cbf49894c-dqk7k"
Jul 1 08:43:46.645576 kubelet[3174]: I0701 08:43:46.645451 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6eeb1842-9b66-4bc5-aa6a-64536984e255-whisker-backend-key-pair\") pod \"whisker-56897d68d6-r6gjf\" (UID: \"6eeb1842-9b66-4bc5-aa6a-64536984e255\") " pod="calico-system/whisker-56897d68d6-r6gjf"
Jul 1 08:43:46.645576 kubelet[3174]: I0701 08:43:46.645480 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc9tk\" (UniqueName: \"kubernetes.io/projected/6eeb1842-9b66-4bc5-aa6a-64536984e255-kube-api-access-mc9tk\") pod \"whisker-56897d68d6-r6gjf\" (UID: \"6eeb1842-9b66-4bc5-aa6a-64536984e255\") " pod="calico-system/whisker-56897d68d6-r6gjf"
Jul 1 08:43:46.645576 kubelet[3174]: I0701 08:43:46.645499 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/20d55f92-2563-4989-a7d2-36b21d3eab8a-calico-apiserver-certs\") pod \"calico-apiserver-794ddc87fb-xsjw6\" (UID: \"20d55f92-2563-4989-a7d2-36b21d3eab8a\") " pod="calico-apiserver/calico-apiserver-794ddc87fb-xsjw6"
Jul 1 08:43:46.645652 kubelet[3174]: I0701 08:43:46.645523 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkmv5\" (UniqueName: \"kubernetes.io/projected/ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2-kube-api-access-fkmv5\") pod \"goldmane-768f4c5c69-td68f\" (UID:
\"ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2\") " pod="calico-system/goldmane-768f4c5c69-td68f"
Jul 1 08:43:46.839949 containerd[1721]: time="2025-07-01T08:43:46.839881526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cs2cr,Uid:90063c43-e610-4281-a138-4d72595d6f99,Namespace:kube-system,Attempt:0,}"
Jul 1 08:43:46.874744 containerd[1721]: time="2025-07-01T08:43:46.874502692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l7f2m,Uid:cdf46a22-d014-4b57-97a1-328bba56303c,Namespace:kube-system,Attempt:0,}"
Jul 1 08:43:46.874744 containerd[1721]: time="2025-07-01T08:43:46.874655362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cbf49894c-dqk7k,Uid:da129cb9-8f64-41b6-a573-9988f86280fc,Namespace:calico-system,Attempt:0,}"
Jul 1 08:43:46.887737 containerd[1721]: time="2025-07-01T08:43:46.887713320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56897d68d6-r6gjf,Uid:6eeb1842-9b66-4bc5-aa6a-64536984e255,Namespace:calico-system,Attempt:0,}"
Jul 1 08:43:46.891394 containerd[1721]: time="2025-07-01T08:43:46.891373324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-td68f,Uid:ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2,Namespace:calico-system,Attempt:0,}"
Jul 1 08:43:47.327639 containerd[1721]: time="2025-07-01T08:43:47.327524064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 1 08:43:47.424954 containerd[1721]: time="2025-07-01T08:43:47.424500849Z" level=error msg="Failed to destroy network for sandbox \"8bce73aeb9e85063b6d439c1ed8621aae5ce2ca5c33ff958d2604d46704ad8da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:47.427343 systemd[1]: run-netns-cni\x2dbaf02f40\x2d7306\x2de111\x2db2ec\x2d3944293dfacc.mount: Deactivated successfully.
Jul 1 08:43:47.434056 containerd[1721]: time="2025-07-01T08:43:47.433587091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cs2cr,Uid:90063c43-e610-4281-a138-4d72595d6f99,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bce73aeb9e85063b6d439c1ed8621aae5ce2ca5c33ff958d2604d46704ad8da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:47.434780 kubelet[3174]: E0701 08:43:47.433738 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bce73aeb9e85063b6d439c1ed8621aae5ce2ca5c33ff958d2604d46704ad8da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:47.434780 kubelet[3174]: E0701 08:43:47.433815 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bce73aeb9e85063b6d439c1ed8621aae5ce2ca5c33ff958d2604d46704ad8da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cs2cr"
Jul 1 08:43:47.434780 kubelet[3174]: E0701 08:43:47.433834 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bce73aeb9e85063b6d439c1ed8621aae5ce2ca5c33ff958d2604d46704ad8da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cs2cr"
Jul 1 08:43:47.435086 kubelet[3174]: E0701 08:43:47.433871 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cs2cr_kube-system(90063c43-e610-4281-a138-4d72595d6f99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cs2cr_kube-system(90063c43-e610-4281-a138-4d72595d6f99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bce73aeb9e85063b6d439c1ed8621aae5ce2ca5c33ff958d2604d46704ad8da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cs2cr" podUID="90063c43-e610-4281-a138-4d72595d6f99"
Jul 1 08:43:47.452412 containerd[1721]: time="2025-07-01T08:43:47.452366213Z" level=error msg="Failed to destroy network for sandbox \"e84155e10db34a28279d9dfff63563ec0d54bb8dc02c4fc070e0620c07b53d6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:47.454264 systemd[1]: run-netns-cni\x2d131b2a52\x2da2d9\x2df0a6\x2d16ed\x2dca8987c9e7ad.mount: Deactivated successfully.
Jul 1 08:43:47.457311 containerd[1721]: time="2025-07-01T08:43:47.457245405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56897d68d6-r6gjf,Uid:6eeb1842-9b66-4bc5-aa6a-64536984e255,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e84155e10db34a28279d9dfff63563ec0d54bb8dc02c4fc070e0620c07b53d6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:47.457705 kubelet[3174]: E0701 08:43:47.457462 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e84155e10db34a28279d9dfff63563ec0d54bb8dc02c4fc070e0620c07b53d6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:47.457705 kubelet[3174]: E0701 08:43:47.457516 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e84155e10db34a28279d9dfff63563ec0d54bb8dc02c4fc070e0620c07b53d6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56897d68d6-r6gjf"
Jul 1 08:43:47.457705 kubelet[3174]: E0701 08:43:47.457534 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e84155e10db34a28279d9dfff63563ec0d54bb8dc02c4fc070e0620c07b53d6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
pod="calico-system/whisker-56897d68d6-r6gjf"
Jul 1 08:43:47.457848 kubelet[3174]: E0701 08:43:47.457570 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56897d68d6-r6gjf_calico-system(6eeb1842-9b66-4bc5-aa6a-64536984e255)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56897d68d6-r6gjf_calico-system(6eeb1842-9b66-4bc5-aa6a-64536984e255)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e84155e10db34a28279d9dfff63563ec0d54bb8dc02c4fc070e0620c07b53d6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56897d68d6-r6gjf" podUID="6eeb1842-9b66-4bc5-aa6a-64536984e255"
Jul 1 08:43:47.462721 containerd[1721]: time="2025-07-01T08:43:47.462691995Z" level=error msg="Failed to destroy network for sandbox \"039df3d737a367ea266d331c178ca9aadeb4000bf42f59f5fdba253c5dc5f1ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:47.465650 systemd[1]: run-netns-cni\x2d5c2ae966\x2da8a6\x2db9a5\x2d2347\x2d8389f760dc1a.mount: Deactivated successfully.
Jul 1 08:43:47.466178 containerd[1721]: time="2025-07-01T08:43:47.466148169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l7f2m,Uid:cdf46a22-d014-4b57-97a1-328bba56303c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"039df3d737a367ea266d331c178ca9aadeb4000bf42f59f5fdba253c5dc5f1ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:47.466773 kubelet[3174]: E0701 08:43:47.466407 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"039df3d737a367ea266d331c178ca9aadeb4000bf42f59f5fdba253c5dc5f1ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:47.466773 kubelet[3174]: E0701 08:43:47.466450 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"039df3d737a367ea266d331c178ca9aadeb4000bf42f59f5fdba253c5dc5f1ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l7f2m"
Jul 1 08:43:47.466773 kubelet[3174]: E0701 08:43:47.466468 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"039df3d737a367ea266d331c178ca9aadeb4000bf42f59f5fdba253c5dc5f1ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l7f2m"
Jul 1 08:43:47.466948 kubelet[3174]: E0701 08:43:47.466532 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-l7f2m_kube-system(cdf46a22-d014-4b57-97a1-328bba56303c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-l7f2m_kube-system(cdf46a22-d014-4b57-97a1-328bba56303c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"039df3d737a367ea266d331c178ca9aadeb4000bf42f59f5fdba253c5dc5f1ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-l7f2m" podUID="cdf46a22-d014-4b57-97a1-328bba56303c"
Jul 1 08:43:47.470662 containerd[1721]: time="2025-07-01T08:43:47.470620284Z" level=error msg="Failed to destroy network for sandbox \"ea3d5370e6971c211de363e389c372c4fbf41217c3b2802105e5d95138ae5574\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:47.473080 systemd[1]: run-netns-cni\x2dcdb89eac\x2d1df5\x2da6ad\x2d5460\x2df933ecbfc36e.mount: Deactivated successfully.
Jul 1 08:43:47.477376 containerd[1721]: time="2025-07-01T08:43:47.477346093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cbf49894c-dqk7k,Uid:da129cb9-8f64-41b6-a573-9988f86280fc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea3d5370e6971c211de363e389c372c4fbf41217c3b2802105e5d95138ae5574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:47.480145 kubelet[3174]: E0701 08:43:47.479820 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea3d5370e6971c211de363e389c372c4fbf41217c3b2802105e5d95138ae5574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:47.480145 kubelet[3174]: E0701 08:43:47.479857 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea3d5370e6971c211de363e389c372c4fbf41217c3b2802105e5d95138ae5574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cbf49894c-dqk7k"
Jul 1 08:43:47.480145 kubelet[3174]: E0701 08:43:47.479875 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea3d5370e6971c211de363e389c372c4fbf41217c3b2802105e5d95138ae5574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
pod="calico-system/calico-kube-controllers-cbf49894c-dqk7k"
Jul 1 08:43:47.480270 kubelet[3174]: E0701 08:43:47.479906 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cbf49894c-dqk7k_calico-system(da129cb9-8f64-41b6-a573-9988f86280fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cbf49894c-dqk7k_calico-system(da129cb9-8f64-41b6-a573-9988f86280fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea3d5370e6971c211de363e389c372c4fbf41217c3b2802105e5d95138ae5574\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cbf49894c-dqk7k" podUID="da129cb9-8f64-41b6-a573-9988f86280fc"
Jul 1 08:43:47.482730 containerd[1721]: time="2025-07-01T08:43:47.482636504Z" level=error msg="Failed to destroy network for sandbox \"e13bf6f544bdf439972f8f6a84b6094a7b7c9edfc2c0345352d539fd193fafd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:47.485571 containerd[1721]: time="2025-07-01T08:43:47.485535421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-td68f,Uid:ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e13bf6f544bdf439972f8f6a84b6094a7b7c9edfc2c0345352d539fd193fafd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:47.485703 kubelet[3174]: E0701 08:43:47.485660 3174 log.go:32] "RunPodSandbox from runtime service failed"
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e13bf6f544bdf439972f8f6a84b6094a7b7c9edfc2c0345352d539fd193fafd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:47.485703 kubelet[3174]: E0701 08:43:47.485691 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e13bf6f544bdf439972f8f6a84b6094a7b7c9edfc2c0345352d539fd193fafd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-td68f"
Jul 1 08:43:47.485780 kubelet[3174]: E0701 08:43:47.485708 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e13bf6f544bdf439972f8f6a84b6094a7b7c9edfc2c0345352d539fd193fafd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-td68f"
Jul 1 08:43:47.485806 kubelet[3174]: E0701 08:43:47.485777 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-td68f_calico-system(ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-td68f_calico-system(ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e13bf6f544bdf439972f8f6a84b6094a7b7c9edfc2c0345352d539fd193fafd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has
mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-td68f" podUID="ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2"
Jul 1 08:43:47.746947 kubelet[3174]: E0701 08:43:47.746861 3174 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
Jul 1 08:43:47.746947 kubelet[3174]: E0701 08:43:47.746930 3174 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20d55f92-2563-4989-a7d2-36b21d3eab8a-calico-apiserver-certs podName:20d55f92-2563-4989-a7d2-36b21d3eab8a nodeName:}" failed. No retries permitted until 2025-07-01 08:43:48.246914666 +0000 UTC m=+34.092525745 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/20d55f92-2563-4989-a7d2-36b21d3eab8a-calico-apiserver-certs") pod "calico-apiserver-794ddc87fb-xsjw6" (UID: "20d55f92-2563-4989-a7d2-36b21d3eab8a") : failed to sync secret cache: timed out waiting for the condition
Jul 1 08:43:47.747109 kubelet[3174]: E0701 08:43:47.746861 3174 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
Jul 1 08:43:47.747109 kubelet[3174]: E0701 08:43:47.747090 3174 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/252bf1bb-5284-4a70-8a20-be1644a678b7-calico-apiserver-certs podName:252bf1bb-5284-4a70-8a20-be1644a678b7 nodeName:}" failed. No retries permitted until 2025-07-01 08:43:48.247076605 +0000 UTC m=+34.092687692 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/252bf1bb-5284-4a70-8a20-be1644a678b7-calico-apiserver-certs") pod "calico-apiserver-794ddc87fb-4c49p" (UID: "252bf1bb-5284-4a70-8a20-be1644a678b7") : failed to sync secret cache: timed out waiting for the condition
Jul 1 08:43:48.244323 systemd[1]: Created slice kubepods-besteffort-pod921bd48b_6a52_4928_98d5_dc65a968d1c0.slice - libcontainer container kubepods-besteffort-pod921bd48b_6a52_4928_98d5_dc65a968d1c0.slice.
Jul 1 08:43:48.246013 containerd[1721]: time="2025-07-01T08:43:48.245980146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cpd2x,Uid:921bd48b-6a52-4928-98d5-dc65a968d1c0,Namespace:calico-system,Attempt:0,}"
Jul 1 08:43:48.294203 containerd[1721]: time="2025-07-01T08:43:48.294151429Z" level=error msg="Failed to destroy network for sandbox \"e0c8e37a333d5ff25cfe491f1ed09a04c43591f5fef0d2bbd88aae6cdb2a0847\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:48.299356 containerd[1721]: time="2025-07-01T08:43:48.299318799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cpd2x,Uid:921bd48b-6a52-4928-98d5-dc65a968d1c0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c8e37a333d5ff25cfe491f1ed09a04c43591f5fef0d2bbd88aae6cdb2a0847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:48.299531 kubelet[3174]: E0701 08:43:48.299493 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c8e37a333d5ff25cfe491f1ed09a04c43591f5fef0d2bbd88aae6cdb2a0847\": plugin
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:48.299579 kubelet[3174]: E0701 08:43:48.299555 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c8e37a333d5ff25cfe491f1ed09a04c43591f5fef0d2bbd88aae6cdb2a0847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cpd2x"
Jul 1 08:43:48.299579 kubelet[3174]: E0701 08:43:48.299573 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c8e37a333d5ff25cfe491f1ed09a04c43591f5fef0d2bbd88aae6cdb2a0847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cpd2x"
Jul 1 08:43:48.299667 kubelet[3174]: E0701 08:43:48.299612 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cpd2x_calico-system(921bd48b-6a52-4928-98d5-dc65a968d1c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cpd2x_calico-system(921bd48b-6a52-4928-98d5-dc65a968d1c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0c8e37a333d5ff25cfe491f1ed09a04c43591f5fef0d2bbd88aae6cdb2a0847\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cpd2x" podUID="921bd48b-6a52-4928-98d5-dc65a968d1c0"
Jul 1 08:43:48.363496 containerd[1721]:
time="2025-07-01T08:43:48.363469210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794ddc87fb-xsjw6,Uid:20d55f92-2563-4989-a7d2-36b21d3eab8a,Namespace:calico-apiserver,Attempt:0,}"
Jul 1 08:43:48.382939 containerd[1721]: time="2025-07-01T08:43:48.382907923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794ddc87fb-4c49p,Uid:252bf1bb-5284-4a70-8a20-be1644a678b7,Namespace:calico-apiserver,Attempt:0,}"
Jul 1 08:43:48.418864 systemd[1]: run-netns-cni\x2d4357d477\x2de058\x2dc702\x2da5dc\x2d444d071b7aac.mount: Deactivated successfully.
Jul 1 08:43:48.466917 containerd[1721]: time="2025-07-01T08:43:48.466786674Z" level=error msg="Failed to destroy network for sandbox \"ed87c4dc6a86526486ff90cb8646e48939adb73bc5ce20f5ea48ef52651cf1bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:48.469516 systemd[1]: run-netns-cni\x2d7ed3a734\x2d87b8\x2def5a\x2dfd0b\x2d3db15bac358e.mount: Deactivated successfully.
Jul 1 08:43:48.470965 containerd[1721]: time="2025-07-01T08:43:48.470858534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794ddc87fb-4c49p,Uid:252bf1bb-5284-4a70-8a20-be1644a678b7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed87c4dc6a86526486ff90cb8646e48939adb73bc5ce20f5ea48ef52651cf1bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:48.471187 kubelet[3174]: E0701 08:43:48.471130 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed87c4dc6a86526486ff90cb8646e48939adb73bc5ce20f5ea48ef52651cf1bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:48.471187 kubelet[3174]: E0701 08:43:48.471176 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed87c4dc6a86526486ff90cb8646e48939adb73bc5ce20f5ea48ef52651cf1bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-794ddc87fb-4c49p"
Jul 1 08:43:48.471458 kubelet[3174]: E0701 08:43:48.471204 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed87c4dc6a86526486ff90cb8646e48939adb73bc5ce20f5ea48ef52651cf1bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
pod="calico-apiserver/calico-apiserver-794ddc87fb-4c49p"
Jul 1 08:43:48.471458 kubelet[3174]: E0701 08:43:48.471246 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-794ddc87fb-4c49p_calico-apiserver(252bf1bb-5284-4a70-8a20-be1644a678b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-794ddc87fb-4c49p_calico-apiserver(252bf1bb-5284-4a70-8a20-be1644a678b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed87c4dc6a86526486ff90cb8646e48939adb73bc5ce20f5ea48ef52651cf1bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-794ddc87fb-4c49p" podUID="252bf1bb-5284-4a70-8a20-be1644a678b7"
Jul 1 08:43:48.485767 containerd[1721]: time="2025-07-01T08:43:48.485709549Z" level=error msg="Failed to destroy network for sandbox \"a9e475e4ca65c25fa3af6409059756dc1ad0a35cc5dfda2d61747cc33bf17a90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:48.487346 systemd[1]: run-netns-cni\x2df2994ff4\x2d0dca\x2d796a\x2d4b64\x2dc59086dd0fcf.mount: Deactivated successfully.
Jul 1 08:43:48.491496 containerd[1721]: time="2025-07-01T08:43:48.491449718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794ddc87fb-xsjw6,Uid:20d55f92-2563-4989-a7d2-36b21d3eab8a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9e475e4ca65c25fa3af6409059756dc1ad0a35cc5dfda2d61747cc33bf17a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:48.491814 kubelet[3174]: E0701 08:43:48.491581 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9e475e4ca65c25fa3af6409059756dc1ad0a35cc5dfda2d61747cc33bf17a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 1 08:43:48.491814 kubelet[3174]: E0701 08:43:48.491624 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9e475e4ca65c25fa3af6409059756dc1ad0a35cc5dfda2d61747cc33bf17a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-794ddc87fb-xsjw6"
Jul 1 08:43:48.491814 kubelet[3174]: E0701 08:43:48.491637 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9e475e4ca65c25fa3af6409059756dc1ad0a35cc5dfda2d61747cc33bf17a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
pod="calico-apiserver/calico-apiserver-794ddc87fb-xsjw6"
Jul 1 08:43:48.491892 kubelet[3174]: E0701 08:43:48.491668 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-794ddc87fb-xsjw6_calico-apiserver(20d55f92-2563-4989-a7d2-36b21d3eab8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-794ddc87fb-xsjw6_calico-apiserver(20d55f92-2563-4989-a7d2-36b21d3eab8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9e475e4ca65c25fa3af6409059756dc1ad0a35cc5dfda2d61747cc33bf17a90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-794ddc87fb-xsjw6" podUID="20d55f92-2563-4989-a7d2-36b21d3eab8a"
Jul 1 08:43:55.328532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4143105677.mount: Deactivated successfully.
Jul 1 08:43:55.355575 containerd[1721]: time="2025-07-01T08:43:55.355536836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:43:55.363989 containerd[1721]: time="2025-07-01T08:43:55.363955097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 1 08:43:55.366433 containerd[1721]: time="2025-07-01T08:43:55.366379030Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:43:55.369505 containerd[1721]: time="2025-07-01T08:43:55.369466378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:43:55.369783 containerd[1721]: time="2025-07-01T08:43:55.369761178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 8.040970984s" Jul 1 08:43:55.369849 containerd[1721]: time="2025-07-01T08:43:55.369839239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 1 08:43:55.381811 containerd[1721]: time="2025-07-01T08:43:55.381786449Z" level=info msg="CreateContainer within sandbox \"2ec7a2522a6c32b40cb90a620a8ab11c3335bde55fccdbe9961d35efecad1fd5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 1 08:43:55.399059 containerd[1721]: time="2025-07-01T08:43:55.399035004Z" level=info msg="Container 
8e75f946df149813a89e15f6088f01d8f975be547efccff59564f7d7eb54a317: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:43:55.402585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3144607628.mount: Deactivated successfully. Jul 1 08:43:55.415784 containerd[1721]: time="2025-07-01T08:43:55.415745243Z" level=info msg="CreateContainer within sandbox \"2ec7a2522a6c32b40cb90a620a8ab11c3335bde55fccdbe9961d35efecad1fd5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8e75f946df149813a89e15f6088f01d8f975be547efccff59564f7d7eb54a317\"" Jul 1 08:43:55.416319 containerd[1721]: time="2025-07-01T08:43:55.416096429Z" level=info msg="StartContainer for \"8e75f946df149813a89e15f6088f01d8f975be547efccff59564f7d7eb54a317\"" Jul 1 08:43:55.417509 containerd[1721]: time="2025-07-01T08:43:55.417482411Z" level=info msg="connecting to shim 8e75f946df149813a89e15f6088f01d8f975be547efccff59564f7d7eb54a317" address="unix:///run/containerd/s/11e05ad58ac838010d229ce5bf5ed618998b4dd18909eb39e7c41b0e369e1918" protocol=ttrpc version=3 Jul 1 08:43:55.432884 systemd[1]: Started cri-containerd-8e75f946df149813a89e15f6088f01d8f975be547efccff59564f7d7eb54a317.scope - libcontainer container 8e75f946df149813a89e15f6088f01d8f975be547efccff59564f7d7eb54a317. Jul 1 08:43:55.470253 containerd[1721]: time="2025-07-01T08:43:55.470025153Z" level=info msg="StartContainer for \"8e75f946df149813a89e15f6088f01d8f975be547efccff59564f7d7eb54a317\" returns successfully" Jul 1 08:43:55.665476 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 1 08:43:55.665576 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 1 08:43:55.793307 kubelet[3174]: I0701 08:43:55.793276 3174 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eeb1842-9b66-4bc5-aa6a-64536984e255-whisker-ca-bundle\") pod \"6eeb1842-9b66-4bc5-aa6a-64536984e255\" (UID: \"6eeb1842-9b66-4bc5-aa6a-64536984e255\") " Jul 1 08:43:55.793597 kubelet[3174]: I0701 08:43:55.793319 3174 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6eeb1842-9b66-4bc5-aa6a-64536984e255-whisker-backend-key-pair\") pod \"6eeb1842-9b66-4bc5-aa6a-64536984e255\" (UID: \"6eeb1842-9b66-4bc5-aa6a-64536984e255\") " Jul 1 08:43:55.793597 kubelet[3174]: I0701 08:43:55.793344 3174 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc9tk\" (UniqueName: \"kubernetes.io/projected/6eeb1842-9b66-4bc5-aa6a-64536984e255-kube-api-access-mc9tk\") pod \"6eeb1842-9b66-4bc5-aa6a-64536984e255\" (UID: \"6eeb1842-9b66-4bc5-aa6a-64536984e255\") " Jul 1 08:43:55.795644 kubelet[3174]: I0701 08:43:55.795609 3174 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eeb1842-9b66-4bc5-aa6a-64536984e255-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6eeb1842-9b66-4bc5-aa6a-64536984e255" (UID: "6eeb1842-9b66-4bc5-aa6a-64536984e255"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 1 08:43:55.798042 kubelet[3174]: I0701 08:43:55.797616 3174 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eeb1842-9b66-4bc5-aa6a-64536984e255-kube-api-access-mc9tk" (OuterVolumeSpecName: "kube-api-access-mc9tk") pod "6eeb1842-9b66-4bc5-aa6a-64536984e255" (UID: "6eeb1842-9b66-4bc5-aa6a-64536984e255"). InnerVolumeSpecName "kube-api-access-mc9tk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 1 08:43:55.799945 kubelet[3174]: I0701 08:43:55.799917 3174 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eeb1842-9b66-4bc5-aa6a-64536984e255-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6eeb1842-9b66-4bc5-aa6a-64536984e255" (UID: "6eeb1842-9b66-4bc5-aa6a-64536984e255"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 1 08:43:55.894025 kubelet[3174]: I0701 08:43:55.894001 3174 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6eeb1842-9b66-4bc5-aa6a-64536984e255-whisker-backend-key-pair\") on node \"ci-9999.9.9-s-875ad0e937\" DevicePath \"\"" Jul 1 08:43:55.894025 kubelet[3174]: I0701 08:43:55.894026 3174 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mc9tk\" (UniqueName: \"kubernetes.io/projected/6eeb1842-9b66-4bc5-aa6a-64536984e255-kube-api-access-mc9tk\") on node \"ci-9999.9.9-s-875ad0e937\" DevicePath \"\"" Jul 1 08:43:55.894139 kubelet[3174]: I0701 08:43:55.894036 3174 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eeb1842-9b66-4bc5-aa6a-64536984e255-whisker-ca-bundle\") on node \"ci-9999.9.9-s-875ad0e937\" DevicePath \"\"" Jul 1 08:43:56.245015 systemd[1]: Removed slice kubepods-besteffort-pod6eeb1842_9b66_4bc5_aa6a_64536984e255.slice - libcontainer container kubepods-besteffort-pod6eeb1842_9b66_4bc5_aa6a_64536984e255.slice. Jul 1 08:43:56.328058 systemd[1]: var-lib-kubelet-pods-6eeb1842\x2d9b66\x2d4bc5\x2daa6a\x2d64536984e255-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmc9tk.mount: Deactivated successfully. 
Jul 1 08:43:56.328169 systemd[1]: var-lib-kubelet-pods-6eeb1842\x2d9b66\x2d4bc5\x2daa6a\x2d64536984e255-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 1 08:43:56.357201 kubelet[3174]: I0701 08:43:56.356658 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5gd8x" podStartSLOduration=1.913642191 podStartE2EDuration="24.356641672s" podCreationTimestamp="2025-07-01 08:43:32 +0000 UTC" firstStartedPulling="2025-07-01 08:43:32.927267761 +0000 UTC m=+18.772878856" lastFinishedPulling="2025-07-01 08:43:55.370267246 +0000 UTC m=+41.215878337" observedRunningTime="2025-07-01 08:43:56.356300893 +0000 UTC m=+42.201911987" watchObservedRunningTime="2025-07-01 08:43:56.356641672 +0000 UTC m=+42.202252824" Jul 1 08:43:56.406324 systemd[1]: Created slice kubepods-besteffort-pode8ba560d_132e_4eb6_b383_bee8beb0a287.slice - libcontainer container kubepods-besteffort-pode8ba560d_132e_4eb6_b383_bee8beb0a287.slice. 
Jul 1 08:43:56.497782 kubelet[3174]: I0701 08:43:56.497661 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8ba560d-132e-4eb6-b383-bee8beb0a287-whisker-ca-bundle\") pod \"whisker-6bc9546454-d4ztj\" (UID: \"e8ba560d-132e-4eb6-b383-bee8beb0a287\") " pod="calico-system/whisker-6bc9546454-d4ztj" Jul 1 08:43:56.497782 kubelet[3174]: I0701 08:43:56.497697 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg9hv\" (UniqueName: \"kubernetes.io/projected/e8ba560d-132e-4eb6-b383-bee8beb0a287-kube-api-access-zg9hv\") pod \"whisker-6bc9546454-d4ztj\" (UID: \"e8ba560d-132e-4eb6-b383-bee8beb0a287\") " pod="calico-system/whisker-6bc9546454-d4ztj" Jul 1 08:43:56.497782 kubelet[3174]: I0701 08:43:56.497719 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8ba560d-132e-4eb6-b383-bee8beb0a287-whisker-backend-key-pair\") pod \"whisker-6bc9546454-d4ztj\" (UID: \"e8ba560d-132e-4eb6-b383-bee8beb0a287\") " pod="calico-system/whisker-6bc9546454-d4ztj" Jul 1 08:43:56.711517 containerd[1721]: time="2025-07-01T08:43:56.711467315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bc9546454-d4ztj,Uid:e8ba560d-132e-4eb6-b383-bee8beb0a287,Namespace:calico-system,Attempt:0,}" Jul 1 08:43:56.800887 systemd-networkd[1349]: cali8ca5dff6aa6: Link UP Jul 1 08:43:56.801019 systemd-networkd[1349]: cali8ca5dff6aa6: Gained carrier Jul 1 08:43:56.812638 containerd[1721]: 2025-07-01 08:43:56.736 [INFO][4261] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 1 08:43:56.812638 containerd[1721]: 2025-07-01 08:43:56.743 [INFO][4261] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--9999.9.9--s--875ad0e937-k8s-whisker--6bc9546454--d4ztj-eth0 whisker-6bc9546454- calico-system e8ba560d-132e-4eb6-b383-bee8beb0a287 877 0 2025-07-01 08:43:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bc9546454 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-9999.9.9-s-875ad0e937 whisker-6bc9546454-d4ztj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8ca5dff6aa6 [] [] }} ContainerID="b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" Namespace="calico-system" Pod="whisker-6bc9546454-d4ztj" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-whisker--6bc9546454--d4ztj-" Jul 1 08:43:56.812638 containerd[1721]: 2025-07-01 08:43:56.743 [INFO][4261] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" Namespace="calico-system" Pod="whisker-6bc9546454-d4ztj" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-whisker--6bc9546454--d4ztj-eth0" Jul 1 08:43:56.812638 containerd[1721]: 2025-07-01 08:43:56.763 [INFO][4274] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" HandleID="k8s-pod-network.b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" Workload="ci--9999.9.9--s--875ad0e937-k8s-whisker--6bc9546454--d4ztj-eth0" Jul 1 08:43:56.812865 containerd[1721]: 2025-07-01 08:43:56.763 [INFO][4274] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" HandleID="k8s-pod-network.b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" Workload="ci--9999.9.9--s--875ad0e937-k8s-whisker--6bc9546454--d4ztj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5820), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-9999.9.9-s-875ad0e937", "pod":"whisker-6bc9546454-d4ztj", "timestamp":"2025-07-01 08:43:56.763397053 +0000 UTC"}, Hostname:"ci-9999.9.9-s-875ad0e937", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:43:56.812865 containerd[1721]: 2025-07-01 08:43:56.763 [INFO][4274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:43:56.812865 containerd[1721]: 2025-07-01 08:43:56.763 [INFO][4274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:43:56.812865 containerd[1721]: 2025-07-01 08:43:56.763 [INFO][4274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-s-875ad0e937' Jul 1 08:43:56.812865 containerd[1721]: 2025-07-01 08:43:56.768 [INFO][4274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:56.812865 containerd[1721]: 2025-07-01 08:43:56.772 [INFO][4274] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:56.812865 containerd[1721]: 2025-07-01 08:43:56.774 [INFO][4274] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:56.812865 containerd[1721]: 2025-07-01 08:43:56.777 [INFO][4274] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:56.812865 containerd[1721]: 2025-07-01 08:43:56.778 [INFO][4274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:56.813055 containerd[1721]: 2025-07-01 08:43:56.778 [INFO][4274] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 
handle="k8s-pod-network.b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:56.813055 containerd[1721]: 2025-07-01 08:43:56.779 [INFO][4274] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea Jul 1 08:43:56.813055 containerd[1721]: 2025-07-01 08:43:56.785 [INFO][4274] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:56.813055 containerd[1721]: 2025-07-01 08:43:56.789 [INFO][4274] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.129/26] block=192.168.71.128/26 handle="k8s-pod-network.b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:56.813055 containerd[1721]: 2025-07-01 08:43:56.789 [INFO][4274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.129/26] handle="k8s-pod-network.b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:56.813055 containerd[1721]: 2025-07-01 08:43:56.789 [INFO][4274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 1 08:43:56.813055 containerd[1721]: 2025-07-01 08:43:56.789 [INFO][4274] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.129/26] IPv6=[] ContainerID="b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" HandleID="k8s-pod-network.b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" Workload="ci--9999.9.9--s--875ad0e937-k8s-whisker--6bc9546454--d4ztj-eth0" Jul 1 08:43:56.813177 containerd[1721]: 2025-07-01 08:43:56.791 [INFO][4261] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" Namespace="calico-system" Pod="whisker-6bc9546454-d4ztj" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-whisker--6bc9546454--d4ztj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-whisker--6bc9546454--d4ztj-eth0", GenerateName:"whisker-6bc9546454-", Namespace:"calico-system", SelfLink:"", UID:"e8ba560d-132e-4eb6-b383-bee8beb0a287", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bc9546454", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"", Pod:"whisker-6bc9546454-d4ztj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali8ca5dff6aa6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:43:56.813177 containerd[1721]: 2025-07-01 08:43:56.791 [INFO][4261] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.129/32] ContainerID="b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" Namespace="calico-system" Pod="whisker-6bc9546454-d4ztj" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-whisker--6bc9546454--d4ztj-eth0" Jul 1 08:43:56.813262 containerd[1721]: 2025-07-01 08:43:56.791 [INFO][4261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ca5dff6aa6 ContainerID="b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" Namespace="calico-system" Pod="whisker-6bc9546454-d4ztj" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-whisker--6bc9546454--d4ztj-eth0" Jul 1 08:43:56.813262 containerd[1721]: 2025-07-01 08:43:56.801 [INFO][4261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" Namespace="calico-system" Pod="whisker-6bc9546454-d4ztj" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-whisker--6bc9546454--d4ztj-eth0" Jul 1 08:43:56.813309 containerd[1721]: 2025-07-01 08:43:56.801 [INFO][4261] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" Namespace="calico-system" Pod="whisker-6bc9546454-d4ztj" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-whisker--6bc9546454--d4ztj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-whisker--6bc9546454--d4ztj-eth0", GenerateName:"whisker-6bc9546454-", Namespace:"calico-system", SelfLink:"", UID:"e8ba560d-132e-4eb6-b383-bee8beb0a287", 
ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bc9546454", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea", Pod:"whisker-6bc9546454-d4ztj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8ca5dff6aa6", MAC:"22:4b:07:10:69:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:43:56.813364 containerd[1721]: 2025-07-01 08:43:56.810 [INFO][4261] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" Namespace="calico-system" Pod="whisker-6bc9546454-d4ztj" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-whisker--6bc9546454--d4ztj-eth0" Jul 1 08:43:56.843474 containerd[1721]: time="2025-07-01T08:43:56.843363549Z" level=info msg="connecting to shim b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea" address="unix:///run/containerd/s/274211aa8c80ba860cf91e44262351f9f991788ddd5d584ea50a1c8c87431ded" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:43:56.861912 systemd[1]: Started cri-containerd-b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea.scope - libcontainer container 
b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea. Jul 1 08:43:56.896462 containerd[1721]: time="2025-07-01T08:43:56.896442880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bc9546454-d4ztj,Uid:e8ba560d-132e-4eb6-b383-bee8beb0a287,Namespace:calico-system,Attempt:0,} returns sandbox id \"b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea\"" Jul 1 08:43:56.897933 containerd[1721]: time="2025-07-01T08:43:56.897910433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 1 08:43:57.379632 systemd-networkd[1349]: vxlan.calico: Link UP Jul 1 08:43:57.379639 systemd-networkd[1349]: vxlan.calico: Gained carrier Jul 1 08:43:58.242029 kubelet[3174]: I0701 08:43:58.241969 3174 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eeb1842-9b66-4bc5-aa6a-64536984e255" path="/var/lib/kubelet/pods/6eeb1842-9b66-4bc5-aa6a-64536984e255/volumes" Jul 1 08:43:58.284622 containerd[1721]: time="2025-07-01T08:43:58.284584566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:43:58.286645 containerd[1721]: time="2025-07-01T08:43:58.286612095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 1 08:43:58.289071 containerd[1721]: time="2025-07-01T08:43:58.289044679Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:43:58.292344 containerd[1721]: time="2025-07-01T08:43:58.292304769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:43:58.292637 containerd[1721]: time="2025-07-01T08:43:58.292617242Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.394679459s" Jul 1 08:43:58.292680 containerd[1721]: time="2025-07-01T08:43:58.292645486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 1 08:43:58.294659 containerd[1721]: time="2025-07-01T08:43:58.294318361Z" level=info msg="CreateContainer within sandbox \"b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 1 08:43:58.311774 containerd[1721]: time="2025-07-01T08:43:58.310161341Z" level=info msg="Container 1668ea34d02439eefab423df95ffd197c267be916e44d883cd978c2914be9694: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:43:58.325967 containerd[1721]: time="2025-07-01T08:43:58.325944801Z" level=info msg="CreateContainer within sandbox \"b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1668ea34d02439eefab423df95ffd197c267be916e44d883cd978c2914be9694\"" Jul 1 08:43:58.326387 containerd[1721]: time="2025-07-01T08:43:58.326333007Z" level=info msg="StartContainer for \"1668ea34d02439eefab423df95ffd197c267be916e44d883cd978c2914be9694\"" Jul 1 08:43:58.327504 containerd[1721]: time="2025-07-01T08:43:58.327457958Z" level=info msg="connecting to shim 1668ea34d02439eefab423df95ffd197c267be916e44d883cd978c2914be9694" address="unix:///run/containerd/s/274211aa8c80ba860cf91e44262351f9f991788ddd5d584ea50a1c8c87431ded" protocol=ttrpc version=3 Jul 1 08:43:58.345033 systemd[1]: Started 
cri-containerd-1668ea34d02439eefab423df95ffd197c267be916e44d883cd978c2914be9694.scope - libcontainer container 1668ea34d02439eefab423df95ffd197c267be916e44d883cd978c2914be9694. Jul 1 08:43:58.383556 containerd[1721]: time="2025-07-01T08:43:58.383519239Z" level=info msg="StartContainer for \"1668ea34d02439eefab423df95ffd197c267be916e44d883cd978c2914be9694\" returns successfully" Jul 1 08:43:58.384468 containerd[1721]: time="2025-07-01T08:43:58.384446558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 1 08:43:58.438891 systemd-networkd[1349]: cali8ca5dff6aa6: Gained IPv6LL Jul 1 08:43:58.439113 systemd-networkd[1349]: vxlan.calico: Gained IPv6LL Jul 1 08:43:59.239935 containerd[1721]: time="2025-07-01T08:43:59.239877675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-td68f,Uid:ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2,Namespace:calico-system,Attempt:0,}" Jul 1 08:43:59.331558 systemd-networkd[1349]: calic3fd4e3b579: Link UP Jul 1 08:43:59.332507 systemd-networkd[1349]: calic3fd4e3b579: Gained carrier Jul 1 08:43:59.345724 containerd[1721]: 2025-07-01 08:43:59.281 [INFO][4565] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.9.9--s--875ad0e937-k8s-goldmane--768f4c5c69--td68f-eth0 goldmane-768f4c5c69- calico-system ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2 803 0 2025-07-01 08:43:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-9999.9.9-s-875ad0e937 goldmane-768f4c5c69-td68f eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic3fd4e3b579 [] [] }} ContainerID="8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" Namespace="calico-system" Pod="goldmane-768f4c5c69-td68f" 
WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-goldmane--768f4c5c69--td68f-" Jul 1 08:43:59.345724 containerd[1721]: 2025-07-01 08:43:59.281 [INFO][4565] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" Namespace="calico-system" Pod="goldmane-768f4c5c69-td68f" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-goldmane--768f4c5c69--td68f-eth0" Jul 1 08:43:59.345724 containerd[1721]: 2025-07-01 08:43:59.300 [INFO][4577] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" HandleID="k8s-pod-network.8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" Workload="ci--9999.9.9--s--875ad0e937-k8s-goldmane--768f4c5c69--td68f-eth0" Jul 1 08:43:59.346089 containerd[1721]: 2025-07-01 08:43:59.300 [INFO][4577] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" HandleID="k8s-pod-network.8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" Workload="ci--9999.9.9--s--875ad0e937-k8s-goldmane--768f4c5c69--td68f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999.9.9-s-875ad0e937", "pod":"goldmane-768f4c5c69-td68f", "timestamp":"2025-07-01 08:43:59.300616767 +0000 UTC"}, Hostname:"ci-9999.9.9-s-875ad0e937", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:43:59.346089 containerd[1721]: 2025-07-01 08:43:59.300 [INFO][4577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:43:59.346089 containerd[1721]: 2025-07-01 08:43:59.300 [INFO][4577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:43:59.346089 containerd[1721]: 2025-07-01 08:43:59.301 [INFO][4577] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-s-875ad0e937' Jul 1 08:43:59.346089 containerd[1721]: 2025-07-01 08:43:59.305 [INFO][4577] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:59.346089 containerd[1721]: 2025-07-01 08:43:59.308 [INFO][4577] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:59.346089 containerd[1721]: 2025-07-01 08:43:59.311 [INFO][4577] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:59.346089 containerd[1721]: 2025-07-01 08:43:59.313 [INFO][4577] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:59.346089 containerd[1721]: 2025-07-01 08:43:59.315 [INFO][4577] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:59.346281 containerd[1721]: 2025-07-01 08:43:59.316 [INFO][4577] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:59.346281 containerd[1721]: 2025-07-01 08:43:59.317 [INFO][4577] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd Jul 1 08:43:59.346281 containerd[1721]: 2025-07-01 08:43:59.320 [INFO][4577] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:59.346281 containerd[1721]: 2025-07-01 08:43:59.327 [INFO][4577] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.71.130/26] block=192.168.71.128/26 handle="k8s-pod-network.8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:59.346281 containerd[1721]: 2025-07-01 08:43:59.327 [INFO][4577] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.130/26] handle="k8s-pod-network.8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:43:59.346281 containerd[1721]: 2025-07-01 08:43:59.327 [INFO][4577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:43:59.346281 containerd[1721]: 2025-07-01 08:43:59.327 [INFO][4577] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.130/26] IPv6=[] ContainerID="8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" HandleID="k8s-pod-network.8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" Workload="ci--9999.9.9--s--875ad0e937-k8s-goldmane--768f4c5c69--td68f-eth0" Jul 1 08:43:59.346408 containerd[1721]: 2025-07-01 08:43:59.329 [INFO][4565] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" Namespace="calico-system" Pod="goldmane-768f4c5c69-td68f" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-goldmane--768f4c5c69--td68f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-goldmane--768f4c5c69--td68f-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"", Pod:"goldmane-768f4c5c69-td68f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic3fd4e3b579", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:43:59.346408 containerd[1721]: 2025-07-01 08:43:59.329 [INFO][4565] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.130/32] ContainerID="8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" Namespace="calico-system" Pod="goldmane-768f4c5c69-td68f" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-goldmane--768f4c5c69--td68f-eth0" Jul 1 08:43:59.346493 containerd[1721]: 2025-07-01 08:43:59.329 [INFO][4565] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3fd4e3b579 ContainerID="8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" Namespace="calico-system" Pod="goldmane-768f4c5c69-td68f" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-goldmane--768f4c5c69--td68f-eth0" Jul 1 08:43:59.346493 containerd[1721]: 2025-07-01 08:43:59.332 [INFO][4565] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" Namespace="calico-system" Pod="goldmane-768f4c5c69-td68f" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-goldmane--768f4c5c69--td68f-eth0" Jul 1 08:43:59.346531 containerd[1721]: 2025-07-01 08:43:59.333 [INFO][4565] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" Namespace="calico-system" Pod="goldmane-768f4c5c69-td68f" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-goldmane--768f4c5c69--td68f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-goldmane--768f4c5c69--td68f-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd", Pod:"goldmane-768f4c5c69-td68f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic3fd4e3b579", MAC:"3a:97:84:c5:0a:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:43:59.346584 containerd[1721]: 2025-07-01 08:43:59.343 [INFO][4565] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" Namespace="calico-system" Pod="goldmane-768f4c5c69-td68f" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-goldmane--768f4c5c69--td68f-eth0" Jul 1 08:43:59.385941 containerd[1721]: time="2025-07-01T08:43:59.385885682Z" level=info msg="connecting to shim 8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd" address="unix:///run/containerd/s/70ea202cd2ea48ec1e0bba1f4dad0cd10275a4f38a40e6c67182e7b7bd9bbab3" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:43:59.404880 systemd[1]: Started cri-containerd-8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd.scope - libcontainer container 8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd. Jul 1 08:43:59.436512 containerd[1721]: time="2025-07-01T08:43:59.436487430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-td68f,Uid:ad4e2201-5a2c-4e2f-b6d1-01bb7791ddb2,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd\"" Jul 1 08:44:00.242481 containerd[1721]: time="2025-07-01T08:44:00.242233602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cbf49894c-dqk7k,Uid:da129cb9-8f64-41b6-a573-9988f86280fc,Namespace:calico-system,Attempt:0,}" Jul 1 08:44:00.242481 containerd[1721]: time="2025-07-01T08:44:00.242272008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cs2cr,Uid:90063c43-e610-4281-a138-4d72595d6f99,Namespace:kube-system,Attempt:0,}" Jul 1 08:44:00.242651 containerd[1721]: time="2025-07-01T08:44:00.242613953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l7f2m,Uid:cdf46a22-d014-4b57-97a1-328bba56303c,Namespace:kube-system,Attempt:0,}" Jul 1 08:44:00.242820 containerd[1721]: time="2025-07-01T08:44:00.242233772Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-794ddc87fb-4c49p,Uid:252bf1bb-5284-4a70-8a20-be1644a678b7,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:44:00.430961 systemd-networkd[1349]: califb25b451c99: Link UP Jul 1 08:44:00.433004 systemd-networkd[1349]: califb25b451c99: Gained carrier Jul 1 08:44:00.449495 containerd[1721]: 2025-07-01 08:44:00.331 [INFO][4640] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.9.9--s--875ad0e937-k8s-calico--kube--controllers--cbf49894c--dqk7k-eth0 calico-kube-controllers-cbf49894c- calico-system da129cb9-8f64-41b6-a573-9988f86280fc 806 0 2025-07-01 08:43:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cbf49894c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-9999.9.9-s-875ad0e937 calico-kube-controllers-cbf49894c-dqk7k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califb25b451c99 [] [] }} ContainerID="e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" Namespace="calico-system" Pod="calico-kube-controllers-cbf49894c-dqk7k" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--kube--controllers--cbf49894c--dqk7k-" Jul 1 08:44:00.449495 containerd[1721]: 2025-07-01 08:44:00.332 [INFO][4640] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" Namespace="calico-system" Pod="calico-kube-controllers-cbf49894c-dqk7k" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--kube--controllers--cbf49894c--dqk7k-eth0" Jul 1 08:44:00.449495 containerd[1721]: 2025-07-01 08:44:00.383 [INFO][4689] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" 
HandleID="k8s-pod-network.e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" Workload="ci--9999.9.9--s--875ad0e937-k8s-calico--kube--controllers--cbf49894c--dqk7k-eth0" Jul 1 08:44:00.449964 containerd[1721]: 2025-07-01 08:44:00.383 [INFO][4689] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" HandleID="k8s-pod-network.e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" Workload="ci--9999.9.9--s--875ad0e937-k8s-calico--kube--controllers--cbf49894c--dqk7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f090), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999.9.9-s-875ad0e937", "pod":"calico-kube-controllers-cbf49894c-dqk7k", "timestamp":"2025-07-01 08:44:00.383262955 +0000 UTC"}, Hostname:"ci-9999.9.9-s-875ad0e937", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:44:00.449964 containerd[1721]: 2025-07-01 08:44:00.383 [INFO][4689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:44:00.449964 containerd[1721]: 2025-07-01 08:44:00.383 [INFO][4689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:44:00.449964 containerd[1721]: 2025-07-01 08:44:00.383 [INFO][4689] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-s-875ad0e937' Jul 1 08:44:00.449964 containerd[1721]: 2025-07-01 08:44:00.389 [INFO][4689] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.449964 containerd[1721]: 2025-07-01 08:44:00.394 [INFO][4689] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.449964 containerd[1721]: 2025-07-01 08:44:00.400 [INFO][4689] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.449964 containerd[1721]: 2025-07-01 08:44:00.404 [INFO][4689] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.449964 containerd[1721]: 2025-07-01 08:44:00.407 [INFO][4689] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.450171 containerd[1721]: 2025-07-01 08:44:00.408 [INFO][4689] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.450171 containerd[1721]: 2025-07-01 08:44:00.409 [INFO][4689] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f Jul 1 08:44:00.450171 containerd[1721]: 2025-07-01 08:44:00.416 [INFO][4689] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.450171 containerd[1721]: 2025-07-01 08:44:00.424 [INFO][4689] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.71.131/26] block=192.168.71.128/26 handle="k8s-pod-network.e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.450171 containerd[1721]: 2025-07-01 08:44:00.424 [INFO][4689] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.131/26] handle="k8s-pod-network.e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.450171 containerd[1721]: 2025-07-01 08:44:00.425 [INFO][4689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:44:00.450171 containerd[1721]: 2025-07-01 08:44:00.425 [INFO][4689] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.131/26] IPv6=[] ContainerID="e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" HandleID="k8s-pod-network.e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" Workload="ci--9999.9.9--s--875ad0e937-k8s-calico--kube--controllers--cbf49894c--dqk7k-eth0" Jul 1 08:44:00.450308 containerd[1721]: 2025-07-01 08:44:00.426 [INFO][4640] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" Namespace="calico-system" Pod="calico-kube-controllers-cbf49894c-dqk7k" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--kube--controllers--cbf49894c--dqk7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-calico--kube--controllers--cbf49894c--dqk7k-eth0", GenerateName:"calico-kube-controllers-cbf49894c-", Namespace:"calico-system", SelfLink:"", UID:"da129cb9-8f64-41b6-a573-9988f86280fc", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"cbf49894c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"", Pod:"calico-kube-controllers-cbf49894c-dqk7k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb25b451c99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:44:00.450370 containerd[1721]: 2025-07-01 08:44:00.427 [INFO][4640] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.131/32] ContainerID="e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" Namespace="calico-system" Pod="calico-kube-controllers-cbf49894c-dqk7k" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--kube--controllers--cbf49894c--dqk7k-eth0" Jul 1 08:44:00.450370 containerd[1721]: 2025-07-01 08:44:00.427 [INFO][4640] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb25b451c99 ContainerID="e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" Namespace="calico-system" Pod="calico-kube-controllers-cbf49894c-dqk7k" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--kube--controllers--cbf49894c--dqk7k-eth0" Jul 1 08:44:00.450370 containerd[1721]: 2025-07-01 08:44:00.434 [INFO][4640] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" Namespace="calico-system" 
Pod="calico-kube-controllers-cbf49894c-dqk7k" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--kube--controllers--cbf49894c--dqk7k-eth0" Jul 1 08:44:00.450439 containerd[1721]: 2025-07-01 08:44:00.435 [INFO][4640] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" Namespace="calico-system" Pod="calico-kube-controllers-cbf49894c-dqk7k" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--kube--controllers--cbf49894c--dqk7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-calico--kube--controllers--cbf49894c--dqk7k-eth0", GenerateName:"calico-kube-controllers-cbf49894c-", Namespace:"calico-system", SelfLink:"", UID:"da129cb9-8f64-41b6-a573-9988f86280fc", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cbf49894c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f", Pod:"calico-kube-controllers-cbf49894c-dqk7k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb25b451c99", MAC:"96:8d:d9:78:79:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:44:00.450498 containerd[1721]: 2025-07-01 08:44:00.446 [INFO][4640] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" Namespace="calico-system" Pod="calico-kube-controllers-cbf49894c-dqk7k" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--kube--controllers--cbf49894c--dqk7k-eth0" Jul 1 08:44:00.516017 containerd[1721]: time="2025-07-01T08:44:00.515317614Z" level=info msg="connecting to shim e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f" address="unix:///run/containerd/s/4285149128f920ae5b5d6b199fb1b29e6fa38c8303959714073fb07291f7f007" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:44:00.559243 systemd-networkd[1349]: cali4a46752e57b: Link UP Jul 1 08:44:00.560265 systemd-networkd[1349]: cali4a46752e57b: Gained carrier Jul 1 08:44:00.560974 systemd[1]: Started cri-containerd-e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f.scope - libcontainer container e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f. 
Jul 1 08:44:00.586932 containerd[1721]: 2025-07-01 08:44:00.333 [INFO][4660] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--l7f2m-eth0 coredns-668d6bf9bc- kube-system cdf46a22-d014-4b57-97a1-328bba56303c 809 0 2025-07-01 08:43:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-9999.9.9-s-875ad0e937 coredns-668d6bf9bc-l7f2m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4a46752e57b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" Namespace="kube-system" Pod="coredns-668d6bf9bc-l7f2m" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--l7f2m-" Jul 1 08:44:00.586932 containerd[1721]: 2025-07-01 08:44:00.335 [INFO][4660] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" Namespace="kube-system" Pod="coredns-668d6bf9bc-l7f2m" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--l7f2m-eth0" Jul 1 08:44:00.586932 containerd[1721]: 2025-07-01 08:44:00.413 [INFO][4691] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" HandleID="k8s-pod-network.d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" Workload="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--l7f2m-eth0" Jul 1 08:44:00.587319 containerd[1721]: 2025-07-01 08:44:00.413 [INFO][4691] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" HandleID="k8s-pod-network.d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" 
Workload="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--l7f2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5e80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-9999.9.9-s-875ad0e937", "pod":"coredns-668d6bf9bc-l7f2m", "timestamp":"2025-07-01 08:44:00.413806682 +0000 UTC"}, Hostname:"ci-9999.9.9-s-875ad0e937", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:44:00.587319 containerd[1721]: 2025-07-01 08:44:00.413 [INFO][4691] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:44:00.587319 containerd[1721]: 2025-07-01 08:44:00.424 [INFO][4691] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:44:00.587319 containerd[1721]: 2025-07-01 08:44:00.424 [INFO][4691] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-s-875ad0e937' Jul 1 08:44:00.587319 containerd[1721]: 2025-07-01 08:44:00.489 [INFO][4691] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.587319 containerd[1721]: 2025-07-01 08:44:00.494 [INFO][4691] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.587319 containerd[1721]: 2025-07-01 08:44:00.504 [INFO][4691] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.587319 containerd[1721]: 2025-07-01 08:44:00.507 [INFO][4691] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.587319 containerd[1721]: 2025-07-01 08:44:00.510 [INFO][4691] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.588265 
containerd[1721]: 2025-07-01 08:44:00.510 [INFO][4691] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.588265 containerd[1721]: 2025-07-01 08:44:00.511 [INFO][4691] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76 Jul 1 08:44:00.588265 containerd[1721]: 2025-07-01 08:44:00.521 [INFO][4691] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.588265 containerd[1721]: 2025-07-01 08:44:00.543 [INFO][4691] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.132/26] block=192.168.71.128/26 handle="k8s-pod-network.d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.588265 containerd[1721]: 2025-07-01 08:44:00.543 [INFO][4691] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.132/26] handle="k8s-pod-network.d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.588265 containerd[1721]: 2025-07-01 08:44:00.544 [INFO][4691] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 1 08:44:00.588265 containerd[1721]: 2025-07-01 08:44:00.544 [INFO][4691] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.132/26] IPv6=[] ContainerID="d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" HandleID="k8s-pod-network.d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" Workload="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--l7f2m-eth0" Jul 1 08:44:00.588564 containerd[1721]: 2025-07-01 08:44:00.550 [INFO][4660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" Namespace="kube-system" Pod="coredns-668d6bf9bc-l7f2m" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--l7f2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--l7f2m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cdf46a22-d014-4b57-97a1-328bba56303c", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"", Pod:"coredns-668d6bf9bc-l7f2m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali4a46752e57b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:44:00.588564 containerd[1721]: 2025-07-01 08:44:00.551 [INFO][4660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.132/32] ContainerID="d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" Namespace="kube-system" Pod="coredns-668d6bf9bc-l7f2m" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--l7f2m-eth0" Jul 1 08:44:00.588564 containerd[1721]: 2025-07-01 08:44:00.552 [INFO][4660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a46752e57b ContainerID="d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" Namespace="kube-system" Pod="coredns-668d6bf9bc-l7f2m" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--l7f2m-eth0" Jul 1 08:44:00.588564 containerd[1721]: 2025-07-01 08:44:00.561 [INFO][4660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" Namespace="kube-system" Pod="coredns-668d6bf9bc-l7f2m" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--l7f2m-eth0" Jul 1 08:44:00.588564 containerd[1721]: 2025-07-01 08:44:00.561 [INFO][4660] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" Namespace="kube-system" Pod="coredns-668d6bf9bc-l7f2m" 
WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--l7f2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--l7f2m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cdf46a22-d014-4b57-97a1-328bba56303c", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76", Pod:"coredns-668d6bf9bc-l7f2m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a46752e57b", MAC:"66:54:d0:39:dc:2e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:44:00.588564 containerd[1721]: 
2025-07-01 08:44:00.583 [INFO][4660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" Namespace="kube-system" Pod="coredns-668d6bf9bc-l7f2m" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--l7f2m-eth0" Jul 1 08:44:00.650452 containerd[1721]: time="2025-07-01T08:44:00.650402798Z" level=info msg="connecting to shim d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76" address="unix:///run/containerd/s/aae32b20ce7478885a992cc061f200a0c1912abc57488654b731432ce12d8169" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:44:00.681578 containerd[1721]: time="2025-07-01T08:44:00.681526471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cbf49894c-dqk7k,Uid:da129cb9-8f64-41b6-a573-9988f86280fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f\"" Jul 1 08:44:00.683040 systemd-networkd[1349]: calid8a65c00437: Link UP Jul 1 08:44:00.687275 systemd-networkd[1349]: calid8a65c00437: Gained carrier Jul 1 08:44:00.721080 systemd[1]: Started cri-containerd-d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76.scope - libcontainer container d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76. 
Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.346 [INFO][4649] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--cs2cr-eth0 coredns-668d6bf9bc- kube-system 90063c43-e610-4281-a138-4d72595d6f99 799 0 2025-07-01 08:43:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-9999.9.9-s-875ad0e937 coredns-668d6bf9bc-cs2cr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid8a65c00437 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" Namespace="kube-system" Pod="coredns-668d6bf9bc-cs2cr" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--cs2cr-" Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.346 [INFO][4649] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" Namespace="kube-system" Pod="coredns-668d6bf9bc-cs2cr" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--cs2cr-eth0" Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.416 [INFO][4699] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" HandleID="k8s-pod-network.8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" Workload="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--cs2cr-eth0" Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.416 [INFO][4699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" HandleID="k8s-pod-network.8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" 
Workload="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--cs2cr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032cdb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-9999.9.9-s-875ad0e937", "pod":"coredns-668d6bf9bc-cs2cr", "timestamp":"2025-07-01 08:44:00.416234648 +0000 UTC"}, Hostname:"ci-9999.9.9-s-875ad0e937", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.417 [INFO][4699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.544 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.544 [INFO][4699] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-s-875ad0e937' Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.591 [INFO][4699] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.599 [INFO][4699] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.616 [INFO][4699] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.617 [INFO][4699] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.621 [INFO][4699] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.724514 
containerd[1721]: 2025-07-01 08:44:00.621 [INFO][4699] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.625 [INFO][4699] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83 Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.644 [INFO][4699] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.660 [INFO][4699] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.133/26] block=192.168.71.128/26 handle="k8s-pod-network.8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.660 [INFO][4699] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.133/26] handle="k8s-pod-network.8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.660 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 1 08:44:00.724514 containerd[1721]: 2025-07-01 08:44:00.660 [INFO][4699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.133/26] IPv6=[] ContainerID="8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" HandleID="k8s-pod-network.8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" Workload="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--cs2cr-eth0" Jul 1 08:44:00.725706 containerd[1721]: 2025-07-01 08:44:00.670 [INFO][4649] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" Namespace="kube-system" Pod="coredns-668d6bf9bc-cs2cr" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--cs2cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--cs2cr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"90063c43-e610-4281-a138-4d72595d6f99", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"", Pod:"coredns-668d6bf9bc-cs2cr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calid8a65c00437", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:44:00.725706 containerd[1721]: 2025-07-01 08:44:00.671 [INFO][4649] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.133/32] ContainerID="8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" Namespace="kube-system" Pod="coredns-668d6bf9bc-cs2cr" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--cs2cr-eth0" Jul 1 08:44:00.725706 containerd[1721]: 2025-07-01 08:44:00.674 [INFO][4649] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8a65c00437 ContainerID="8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" Namespace="kube-system" Pod="coredns-668d6bf9bc-cs2cr" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--cs2cr-eth0" Jul 1 08:44:00.725706 containerd[1721]: 2025-07-01 08:44:00.692 [INFO][4649] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" Namespace="kube-system" Pod="coredns-668d6bf9bc-cs2cr" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--cs2cr-eth0" Jul 1 08:44:00.725706 containerd[1721]: 2025-07-01 08:44:00.695 [INFO][4649] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" Namespace="kube-system" Pod="coredns-668d6bf9bc-cs2cr" 
WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--cs2cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--cs2cr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"90063c43-e610-4281-a138-4d72595d6f99", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83", Pod:"coredns-668d6bf9bc-cs2cr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid8a65c00437", MAC:"4a:3a:4d:75:1f:cc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:44:00.725706 containerd[1721]: 
2025-07-01 08:44:00.718 [INFO][4649] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" Namespace="kube-system" Pod="coredns-668d6bf9bc-cs2cr" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-coredns--668d6bf9bc--cs2cr-eth0" Jul 1 08:44:00.776456 systemd-networkd[1349]: calie926061f8c6: Link UP Jul 1 08:44:00.780791 systemd-networkd[1349]: calie926061f8c6: Gained carrier Jul 1 08:44:00.796775 containerd[1721]: time="2025-07-01T08:44:00.796348194Z" level=info msg="connecting to shim 8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83" address="unix:///run/containerd/s/0fee3d370e94c9890c0d75ce06c4dcc6b24180d8d4e196c79beff0a3a76f2909" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.353 [INFO][4672] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--4c49p-eth0 calico-apiserver-794ddc87fb- calico-apiserver 252bf1bb-5284-4a70-8a20-be1644a678b7 810 0 2025-07-01 08:43:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:794ddc87fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-9999.9.9-s-875ad0e937 calico-apiserver-794ddc87fb-4c49p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie926061f8c6 [] [] }} ContainerID="7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-4c49p" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--4c49p-" Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.353 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-4c49p" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--4c49p-eth0" Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.417 [INFO][4702] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" HandleID="k8s-pod-network.7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" Workload="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--4c49p-eth0" Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.417 [INFO][4702] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" HandleID="k8s-pod-network.7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" Workload="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--4c49p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f970), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-9999.9.9-s-875ad0e937", "pod":"calico-apiserver-794ddc87fb-4c49p", "timestamp":"2025-07-01 08:44:00.417512028 +0000 UTC"}, Hostname:"ci-9999.9.9-s-875ad0e937", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.417 [INFO][4702] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.661 [INFO][4702] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.661 [INFO][4702] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-s-875ad0e937' Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.692 [INFO][4702] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.707 [INFO][4702] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.720 [INFO][4702] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.727 [INFO][4702] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.733 [INFO][4702] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.733 [INFO][4702] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.735 [INFO][4702] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51 Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.745 [INFO][4702] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.760 [INFO][4702] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.71.134/26] block=192.168.71.128/26 handle="k8s-pod-network.7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.760 [INFO][4702] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.134/26] handle="k8s-pod-network.7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.760 [INFO][4702] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:44:00.805309 containerd[1721]: 2025-07-01 08:44:00.760 [INFO][4702] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.134/26] IPv6=[] ContainerID="7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" HandleID="k8s-pod-network.7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" Workload="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--4c49p-eth0" Jul 1 08:44:00.805973 containerd[1721]: 2025-07-01 08:44:00.766 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-4c49p" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--4c49p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--4c49p-eth0", GenerateName:"calico-apiserver-794ddc87fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"252bf1bb-5284-4a70-8a20-be1644a678b7", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"794ddc87fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"", Pod:"calico-apiserver-794ddc87fb-4c49p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie926061f8c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:44:00.805973 containerd[1721]: 2025-07-01 08:44:00.767 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.134/32] ContainerID="7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-4c49p" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--4c49p-eth0" Jul 1 08:44:00.805973 containerd[1721]: 2025-07-01 08:44:00.767 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie926061f8c6 ContainerID="7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-4c49p" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--4c49p-eth0" Jul 1 08:44:00.805973 containerd[1721]: 2025-07-01 08:44:00.785 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-4c49p" 
WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--4c49p-eth0" Jul 1 08:44:00.805973 containerd[1721]: 2025-07-01 08:44:00.786 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-4c49p" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--4c49p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--4c49p-eth0", GenerateName:"calico-apiserver-794ddc87fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"252bf1bb-5284-4a70-8a20-be1644a678b7", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794ddc87fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51", Pod:"calico-apiserver-794ddc87fb-4c49p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie926061f8c6", MAC:"56:08:28:39:23:24", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:44:00.805973 containerd[1721]: 2025-07-01 08:44:00.801 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-4c49p" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--4c49p-eth0" Jul 1 08:44:00.806905 systemd-networkd[1349]: calic3fd4e3b579: Gained IPv6LL Jul 1 08:44:00.812543 containerd[1721]: time="2025-07-01T08:44:00.812392099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l7f2m,Uid:cdf46a22-d014-4b57-97a1-328bba56303c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76\"" Jul 1 08:44:00.818404 containerd[1721]: time="2025-07-01T08:44:00.818360106Z" level=info msg="CreateContainer within sandbox \"d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 1 08:44:00.851022 systemd[1]: Started cri-containerd-8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83.scope - libcontainer container 8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83. 
Jul 1 08:44:00.853002 containerd[1721]: time="2025-07-01T08:44:00.852982019Z" level=info msg="Container 4762a715b97721dd77e76505e1f29a36c9e42df794e578a490b5b0735926807a: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:44:00.885337 containerd[1721]: time="2025-07-01T08:44:00.885275890Z" level=info msg="CreateContainer within sandbox \"d6d81033b0c5abc3850c4e6f42f6f512bc7b7195fec5c4673f70a6a7c4ec3c76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4762a715b97721dd77e76505e1f29a36c9e42df794e578a490b5b0735926807a\"" Jul 1 08:44:00.886827 containerd[1721]: time="2025-07-01T08:44:00.886805384Z" level=info msg="StartContainer for \"4762a715b97721dd77e76505e1f29a36c9e42df794e578a490b5b0735926807a\"" Jul 1 08:44:00.890490 containerd[1721]: time="2025-07-01T08:44:00.890118512Z" level=info msg="connecting to shim 4762a715b97721dd77e76505e1f29a36c9e42df794e578a490b5b0735926807a" address="unix:///run/containerd/s/aae32b20ce7478885a992cc061f200a0c1912abc57488654b731432ce12d8169" protocol=ttrpc version=3 Jul 1 08:44:00.897320 containerd[1721]: time="2025-07-01T08:44:00.897290225Z" level=info msg="connecting to shim 7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51" address="unix:///run/containerd/s/b02f68f11ea25e70b57b952335d29edcd64b0337f5ccdcbc023ae5cb4a017c69" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:44:00.934900 systemd[1]: Started cri-containerd-4762a715b97721dd77e76505e1f29a36c9e42df794e578a490b5b0735926807a.scope - libcontainer container 4762a715b97721dd77e76505e1f29a36c9e42df794e578a490b5b0735926807a. Jul 1 08:44:00.948067 systemd[1]: Started cri-containerd-7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51.scope - libcontainer container 7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51. 
Jul 1 08:44:01.015265 containerd[1721]: time="2025-07-01T08:44:01.015244486Z" level=info msg="StartContainer for \"4762a715b97721dd77e76505e1f29a36c9e42df794e578a490b5b0735926807a\" returns successfully" Jul 1 08:44:01.017970 containerd[1721]: time="2025-07-01T08:44:01.017900136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cs2cr,Uid:90063c43-e610-4281-a138-4d72595d6f99,Namespace:kube-system,Attempt:0,} returns sandbox id \"8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83\"" Jul 1 08:44:01.022513 containerd[1721]: time="2025-07-01T08:44:01.022455296Z" level=info msg="CreateContainer within sandbox \"8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 1 08:44:01.048005 containerd[1721]: time="2025-07-01T08:44:01.047780409Z" level=info msg="Container 0b1952f3c14ee1d220645c016e01dc9457f7f402c75e02e34efadd8a908eb4fa: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:44:01.069594 containerd[1721]: time="2025-07-01T08:44:01.069559077Z" level=info msg="CreateContainer within sandbox \"8155e303c3e0bc0f78acc01e92b28bc31f7ec113e15d703d72ef9d3de7447b83\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b1952f3c14ee1d220645c016e01dc9457f7f402c75e02e34efadd8a908eb4fa\"" Jul 1 08:44:01.070432 containerd[1721]: time="2025-07-01T08:44:01.070393223Z" level=info msg="StartContainer for \"0b1952f3c14ee1d220645c016e01dc9457f7f402c75e02e34efadd8a908eb4fa\"" Jul 1 08:44:01.074010 containerd[1721]: time="2025-07-01T08:44:01.073942103Z" level=info msg="connecting to shim 0b1952f3c14ee1d220645c016e01dc9457f7f402c75e02e34efadd8a908eb4fa" address="unix:///run/containerd/s/0fee3d370e94c9890c0d75ce06c4dcc6b24180d8d4e196c79beff0a3a76f2909" protocol=ttrpc version=3 Jul 1 08:44:01.102001 systemd[1]: Started cri-containerd-0b1952f3c14ee1d220645c016e01dc9457f7f402c75e02e34efadd8a908eb4fa.scope - libcontainer container 
0b1952f3c14ee1d220645c016e01dc9457f7f402c75e02e34efadd8a908eb4fa. Jul 1 08:44:01.107712 containerd[1721]: time="2025-07-01T08:44:01.107685991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794ddc87fb-4c49p,Uid:252bf1bb-5284-4a70-8a20-be1644a678b7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51\"" Jul 1 08:44:01.142932 containerd[1721]: time="2025-07-01T08:44:01.142897011Z" level=info msg="StartContainer for \"0b1952f3c14ee1d220645c016e01dc9457f7f402c75e02e34efadd8a908eb4fa\" returns successfully" Jul 1 08:44:01.239036 containerd[1721]: time="2025-07-01T08:44:01.239014139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794ddc87fb-xsjw6,Uid:20d55f92-2563-4989-a7d2-36b21d3eab8a,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:44:01.412114 kubelet[3174]: I0701 08:44:01.411208 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cs2cr" podStartSLOduration=42.411191382 podStartE2EDuration="42.411191382s" podCreationTimestamp="2025-07-01 08:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:44:01.408307288 +0000 UTC m=+47.253918381" watchObservedRunningTime="2025-07-01 08:44:01.411191382 +0000 UTC m=+47.256802473" Jul 1 08:44:01.413784 systemd-networkd[1349]: cali19996639062: Link UP Jul 1 08:44:01.414571 systemd-networkd[1349]: cali19996639062: Gained carrier Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.276 [INFO][5006] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--xsjw6-eth0 calico-apiserver-794ddc87fb- calico-apiserver 20d55f92-2563-4989-a7d2-36b21d3eab8a 808 0 2025-07-01 08:43:30 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:794ddc87fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-9999.9.9-s-875ad0e937 calico-apiserver-794ddc87fb-xsjw6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali19996639062 [] [] }} ContainerID="e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-xsjw6" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--xsjw6-" Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.276 [INFO][5006] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-xsjw6" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--xsjw6-eth0" Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.325 [INFO][5019] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" HandleID="k8s-pod-network.e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" Workload="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--xsjw6-eth0" Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.326 [INFO][5019] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" HandleID="k8s-pod-network.e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" Workload="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--xsjw6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac280), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-9999.9.9-s-875ad0e937", 
"pod":"calico-apiserver-794ddc87fb-xsjw6", "timestamp":"2025-07-01 08:44:01.325831292 +0000 UTC"}, Hostname:"ci-9999.9.9-s-875ad0e937", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.326 [INFO][5019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.326 [INFO][5019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.326 [INFO][5019] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-s-875ad0e937' Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.336 [INFO][5019] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.344 [INFO][5019] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.349 [INFO][5019] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.352 [INFO][5019] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.357 [INFO][5019] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.358 [INFO][5019] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" 
host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.360 [INFO][5019] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.370 [INFO][5019] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.392 [INFO][5019] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.135/26] block=192.168.71.128/26 handle="k8s-pod-network.e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.392 [INFO][5019] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.135/26] handle="k8s-pod-network.e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.392 [INFO][5019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 1 08:44:01.440949 containerd[1721]: 2025-07-01 08:44:01.392 [INFO][5019] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.135/26] IPv6=[] ContainerID="e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" HandleID="k8s-pod-network.e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" Workload="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--xsjw6-eth0" Jul 1 08:44:01.441528 containerd[1721]: 2025-07-01 08:44:01.401 [INFO][5006] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-xsjw6" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--xsjw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--xsjw6-eth0", GenerateName:"calico-apiserver-794ddc87fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"20d55f92-2563-4989-a7d2-36b21d3eab8a", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794ddc87fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"", Pod:"calico-apiserver-794ddc87fb-xsjw6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.71.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19996639062", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:44:01.441528 containerd[1721]: 2025-07-01 08:44:01.402 [INFO][5006] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.135/32] ContainerID="e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-xsjw6" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--xsjw6-eth0" Jul 1 08:44:01.441528 containerd[1721]: 2025-07-01 08:44:01.403 [INFO][5006] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19996639062 ContainerID="e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-xsjw6" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--xsjw6-eth0" Jul 1 08:44:01.441528 containerd[1721]: 2025-07-01 08:44:01.415 [INFO][5006] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-xsjw6" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--xsjw6-eth0" Jul 1 08:44:01.441528 containerd[1721]: 2025-07-01 08:44:01.416 [INFO][5006] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-xsjw6" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--xsjw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--xsjw6-eth0", GenerateName:"calico-apiserver-794ddc87fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"20d55f92-2563-4989-a7d2-36b21d3eab8a", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794ddc87fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b", Pod:"calico-apiserver-794ddc87fb-xsjw6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19996639062", MAC:"42:d5:3b:a7:5e:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:44:01.441528 containerd[1721]: 2025-07-01 08:44:01.432 [INFO][5006] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" Namespace="calico-apiserver" Pod="calico-apiserver-794ddc87fb-xsjw6" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-calico--apiserver--794ddc87fb--xsjw6-eth0" Jul 1 08:44:01.448591 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1839721302.mount: Deactivated successfully. Jul 1 08:44:01.536848 containerd[1721]: time="2025-07-01T08:44:01.536461146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:01.538801 containerd[1721]: time="2025-07-01T08:44:01.538065861Z" level=info msg="connecting to shim e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b" address="unix:///run/containerd/s/a560be45d4c00786f2a2f433fcab229157bf9fec6595d961304d00c0d2e1fe8e" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:44:01.539557 containerd[1721]: time="2025-07-01T08:44:01.539530843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 1 08:44:01.543427 containerd[1721]: time="2025-07-01T08:44:01.543305608Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:01.556768 containerd[1721]: time="2025-07-01T08:44:01.556250200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:01.556768 containerd[1721]: time="2025-07-01T08:44:01.556675617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.172200299s" Jul 1 08:44:01.556768 containerd[1721]: time="2025-07-01T08:44:01.556698728Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 1 08:44:01.561115 containerd[1721]: time="2025-07-01T08:44:01.561097020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 1 08:44:01.561461 containerd[1721]: time="2025-07-01T08:44:01.561443965Z" level=info msg="CreateContainer within sandbox \"b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 1 08:44:01.578024 systemd[1]: Started cri-containerd-e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b.scope - libcontainer container e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b. Jul 1 08:44:01.585585 containerd[1721]: time="2025-07-01T08:44:01.585563839Z" level=info msg="Container addbafa76ae3a9f71499de7dbe6284b28dd8a05970389c0ef2be89c4a8fadb96: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:44:01.615468 containerd[1721]: time="2025-07-01T08:44:01.615439462Z" level=info msg="CreateContainer within sandbox \"b892561c6702e5c17e3a59dc57a9a569b4efeb536947233c1d7e0d93ab834dea\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"addbafa76ae3a9f71499de7dbe6284b28dd8a05970389c0ef2be89c4a8fadb96\"" Jul 1 08:44:01.616608 containerd[1721]: time="2025-07-01T08:44:01.616487813Z" level=info msg="StartContainer for \"addbafa76ae3a9f71499de7dbe6284b28dd8a05970389c0ef2be89c4a8fadb96\"" Jul 1 08:44:01.618392 containerd[1721]: time="2025-07-01T08:44:01.618345714Z" level=info msg="connecting to shim addbafa76ae3a9f71499de7dbe6284b28dd8a05970389c0ef2be89c4a8fadb96" address="unix:///run/containerd/s/274211aa8c80ba860cf91e44262351f9f991788ddd5d584ea50a1c8c87431ded" protocol=ttrpc version=3 Jul 1 08:44:01.635079 systemd[1]: Started cri-containerd-addbafa76ae3a9f71499de7dbe6284b28dd8a05970389c0ef2be89c4a8fadb96.scope - libcontainer container 
addbafa76ae3a9f71499de7dbe6284b28dd8a05970389c0ef2be89c4a8fadb96. Jul 1 08:44:01.706317 containerd[1721]: time="2025-07-01T08:44:01.706088687Z" level=info msg="StartContainer for \"addbafa76ae3a9f71499de7dbe6284b28dd8a05970389c0ef2be89c4a8fadb96\" returns successfully" Jul 1 08:44:01.712081 containerd[1721]: time="2025-07-01T08:44:01.712058817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794ddc87fb-xsjw6,Uid:20d55f92-2563-4989-a7d2-36b21d3eab8a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b\"" Jul 1 08:44:02.150903 systemd-networkd[1349]: calie926061f8c6: Gained IPv6LL Jul 1 08:44:02.239975 containerd[1721]: time="2025-07-01T08:44:02.239917938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cpd2x,Uid:921bd48b-6a52-4928-98d5-dc65a968d1c0,Namespace:calico-system,Attempt:0,}" Jul 1 08:44:02.278949 systemd-networkd[1349]: califb25b451c99: Gained IPv6LL Jul 1 08:44:02.325937 systemd-networkd[1349]: cali963367904a0: Link UP Jul 1 08:44:02.326047 systemd-networkd[1349]: cali963367904a0: Gained carrier Jul 1 08:44:02.336534 kubelet[3174]: I0701 08:44:02.336478 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-l7f2m" podStartSLOduration=43.336372144 podStartE2EDuration="43.336372144s" podCreationTimestamp="2025-07-01 08:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:44:01.473926155 +0000 UTC m=+47.319537249" watchObservedRunningTime="2025-07-01 08:44:02.336372144 +0000 UTC m=+48.181983236" Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.271 [INFO][5120] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.9.9--s--875ad0e937-k8s-csi--node--driver--cpd2x-eth0 csi-node-driver- calico-system 
921bd48b-6a52-4928-98d5-dc65a968d1c0 685 0 2025-07-01 08:43:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-9999.9.9-s-875ad0e937 csi-node-driver-cpd2x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali963367904a0 [] [] }} ContainerID="2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" Namespace="calico-system" Pod="csi-node-driver-cpd2x" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-csi--node--driver--cpd2x-" Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.271 [INFO][5120] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" Namespace="calico-system" Pod="csi-node-driver-cpd2x" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-csi--node--driver--cpd2x-eth0" Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.293 [INFO][5132] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" HandleID="k8s-pod-network.2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" Workload="ci--9999.9.9--s--875ad0e937-k8s-csi--node--driver--cpd2x-eth0" Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.293 [INFO][5132] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" HandleID="k8s-pod-network.2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" Workload="ci--9999.9.9--s--875ad0e937-k8s-csi--node--driver--cpd2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f170), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-9999.9.9-s-875ad0e937", "pod":"csi-node-driver-cpd2x", "timestamp":"2025-07-01 08:44:02.293795793 +0000 UTC"}, Hostname:"ci-9999.9.9-s-875ad0e937", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.293 [INFO][5132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.294 [INFO][5132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.294 [INFO][5132] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-s-875ad0e937' Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.298 [INFO][5132] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.302 [INFO][5132] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.305 [INFO][5132] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.306 [INFO][5132] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.307 [INFO][5132] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.307 [INFO][5132] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 
handle="k8s-pod-network.2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.308 [INFO][5132] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.314 [INFO][5132] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.322 [INFO][5132] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.136/26] block=192.168.71.128/26 handle="k8s-pod-network.2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.322 [INFO][5132] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.136/26] handle="k8s-pod-network.2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" host="ci-9999.9.9-s-875ad0e937" Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.322 [INFO][5132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 1 08:44:02.338584 containerd[1721]: 2025-07-01 08:44:02.322 [INFO][5132] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.136/26] IPv6=[] ContainerID="2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" HandleID="k8s-pod-network.2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" Workload="ci--9999.9.9--s--875ad0e937-k8s-csi--node--driver--cpd2x-eth0" Jul 1 08:44:02.340177 containerd[1721]: 2025-07-01 08:44:02.323 [INFO][5120] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" Namespace="calico-system" Pod="csi-node-driver-cpd2x" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-csi--node--driver--cpd2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-csi--node--driver--cpd2x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"921bd48b-6a52-4928-98d5-dc65a968d1c0", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"", Pod:"csi-node-driver-cpd2x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.136/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali963367904a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:44:02.340177 containerd[1721]: 2025-07-01 08:44:02.323 [INFO][5120] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.136/32] ContainerID="2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" Namespace="calico-system" Pod="csi-node-driver-cpd2x" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-csi--node--driver--cpd2x-eth0" Jul 1 08:44:02.340177 containerd[1721]: 2025-07-01 08:44:02.323 [INFO][5120] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali963367904a0 ContainerID="2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" Namespace="calico-system" Pod="csi-node-driver-cpd2x" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-csi--node--driver--cpd2x-eth0" Jul 1 08:44:02.340177 containerd[1721]: 2025-07-01 08:44:02.324 [INFO][5120] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" Namespace="calico-system" Pod="csi-node-driver-cpd2x" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-csi--node--driver--cpd2x-eth0" Jul 1 08:44:02.340177 containerd[1721]: 2025-07-01 08:44:02.324 [INFO][5120] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" Namespace="calico-system" Pod="csi-node-driver-cpd2x" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-csi--node--driver--cpd2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--s--875ad0e937-k8s-csi--node--driver--cpd2x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"921bd48b-6a52-4928-98d5-dc65a968d1c0", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 43, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-s-875ad0e937", ContainerID:"2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e", Pod:"csi-node-driver-cpd2x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali963367904a0", MAC:"1e:cf:82:d2:d1:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:44:02.340177 containerd[1721]: 2025-07-01 08:44:02.336 [INFO][5120] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" Namespace="calico-system" Pod="csi-node-driver-cpd2x" WorkloadEndpoint="ci--9999.9.9--s--875ad0e937-k8s-csi--node--driver--cpd2x-eth0" Jul 1 08:44:02.342881 systemd-networkd[1349]: cali4a46752e57b: Gained IPv6LL Jul 1 08:44:02.382381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3417353163.mount: Deactivated successfully. 
Jul 1 08:44:02.393792 containerd[1721]: time="2025-07-01T08:44:02.393745774Z" level=info msg="connecting to shim 2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e" address="unix:///run/containerd/s/02135343f6247f1edd143adc212f555328f4acdf3ad58eb0ad933b85ca3c3d35" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:44:02.420865 systemd[1]: Started cri-containerd-2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e.scope - libcontainer container 2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e. Jul 1 08:44:02.463686 containerd[1721]: time="2025-07-01T08:44:02.463621018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cpd2x,Uid:921bd48b-6a52-4928-98d5-dc65a968d1c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e\"" Jul 1 08:44:02.662895 systemd-networkd[1349]: calid8a65c00437: Gained IPv6LL Jul 1 08:44:02.854938 systemd-networkd[1349]: cali19996639062: Gained IPv6LL Jul 1 08:44:03.792446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2722300829.mount: Deactivated successfully. 
Jul 1 08:44:03.942919 systemd-networkd[1349]: cali963367904a0: Gained IPv6LL Jul 1 08:44:04.197703 containerd[1721]: time="2025-07-01T08:44:04.197616260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:04.205426 containerd[1721]: time="2025-07-01T08:44:04.205389759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 1 08:44:04.208596 containerd[1721]: time="2025-07-01T08:44:04.208565648Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:04.212536 containerd[1721]: time="2025-07-01T08:44:04.212495222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:04.213076 containerd[1721]: time="2025-07-01T08:44:04.212971302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 2.651777228s" Jul 1 08:44:04.213076 containerd[1721]: time="2025-07-01T08:44:04.212999880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 1 08:44:04.213846 containerd[1721]: time="2025-07-01T08:44:04.213821560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 1 08:44:04.215137 containerd[1721]: time="2025-07-01T08:44:04.215102895Z" level=info msg="CreateContainer 
within sandbox \"8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 1 08:44:04.243216 containerd[1721]: time="2025-07-01T08:44:04.240729686Z" level=info msg="Container 026343d5b458123281640a1c315c0fbe18bccfb8dd0d994e1a3729aadff38bab: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:44:04.256673 containerd[1721]: time="2025-07-01T08:44:04.256650626Z" level=info msg="CreateContainer within sandbox \"8f8798755dc92d2db8195476c728617e8a70518d344d6c073a118e92fa0d75bd\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"026343d5b458123281640a1c315c0fbe18bccfb8dd0d994e1a3729aadff38bab\"" Jul 1 08:44:04.257141 containerd[1721]: time="2025-07-01T08:44:04.257079902Z" level=info msg="StartContainer for \"026343d5b458123281640a1c315c0fbe18bccfb8dd0d994e1a3729aadff38bab\"" Jul 1 08:44:04.258473 containerd[1721]: time="2025-07-01T08:44:04.258373493Z" level=info msg="connecting to shim 026343d5b458123281640a1c315c0fbe18bccfb8dd0d994e1a3729aadff38bab" address="unix:///run/containerd/s/70ea202cd2ea48ec1e0bba1f4dad0cd10275a4f38a40e6c67182e7b7bd9bbab3" protocol=ttrpc version=3 Jul 1 08:44:04.277095 systemd[1]: Started cri-containerd-026343d5b458123281640a1c315c0fbe18bccfb8dd0d994e1a3729aadff38bab.scope - libcontainer container 026343d5b458123281640a1c315c0fbe18bccfb8dd0d994e1a3729aadff38bab. 
Jul 1 08:44:04.320110 containerd[1721]: time="2025-07-01T08:44:04.320028764Z" level=info msg="StartContainer for \"026343d5b458123281640a1c315c0fbe18bccfb8dd0d994e1a3729aadff38bab\" returns successfully"
Jul 1 08:44:04.432612 kubelet[3174]: I0701 08:44:04.432561 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-td68f" podStartSLOduration=27.656116297 podStartE2EDuration="32.43254567s" podCreationTimestamp="2025-07-01 08:43:32 +0000 UTC" firstStartedPulling="2025-07-01 08:43:59.437282075 +0000 UTC m=+45.282893166" lastFinishedPulling="2025-07-01 08:44:04.21371145 +0000 UTC m=+50.059322539" observedRunningTime="2025-07-01 08:44:04.430369852 +0000 UTC m=+50.275980943" watchObservedRunningTime="2025-07-01 08:44:04.43254567 +0000 UTC m=+50.278156748"
Jul 1 08:44:04.433189 kubelet[3174]: I0701 08:44:04.432679 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6bc9546454-d4ztj" podStartSLOduration=3.770925676 podStartE2EDuration="8.432673516s" podCreationTimestamp="2025-07-01 08:43:56 +0000 UTC" firstStartedPulling="2025-07-01 08:43:56.897475027 +0000 UTC m=+42.743086103" lastFinishedPulling="2025-07-01 08:44:01.559222859 +0000 UTC m=+47.404833943" observedRunningTime="2025-07-01 08:44:02.424497915 +0000 UTC m=+48.270109006" watchObservedRunningTime="2025-07-01 08:44:04.432673516 +0000 UTC m=+50.278284649"
Jul 1 08:44:04.490794 containerd[1721]: time="2025-07-01T08:44:04.490675236Z" level=info msg="TaskExit event in podsandbox handler container_id:\"026343d5b458123281640a1c315c0fbe18bccfb8dd0d994e1a3729aadff38bab\" id:\"0e5addc71df9d94e0da6a08a8e8b253b610d9915ff47d3f261a80f6774d681b8\" pid:5261 exit_status:1 exited_at:{seconds:1751359444 nanos:490040444}"
Jul 1 08:44:05.479404 containerd[1721]: time="2025-07-01T08:44:05.479361835Z" level=info msg="TaskExit event in podsandbox handler container_id:\"026343d5b458123281640a1c315c0fbe18bccfb8dd0d994e1a3729aadff38bab\" id:\"db35832784727e6cf6b11b9c29f8ce7ee6fdb828c984ae9164c8b3b4da9e80dc\" pid:5286 exit_status:1 exited_at:{seconds:1751359445 nanos:479122526}"
Jul 1 08:44:09.771947 containerd[1721]: time="2025-07-01T08:44:09.771888705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:44:09.774124 containerd[1721]: time="2025-07-01T08:44:09.774089873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"
Jul 1 08:44:09.776853 containerd[1721]: time="2025-07-01T08:44:09.776821529Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:44:09.781096 containerd[1721]: time="2025-07-01T08:44:09.781053199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:44:09.781505 containerd[1721]: time="2025-07-01T08:44:09.781380371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.567462514s"
Jul 1 08:44:09.781505 containerd[1721]: time="2025-07-01T08:44:09.781407557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Jul 1 08:44:09.784546 containerd[1721]: time="2025-07-01T08:44:09.783358722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 1 08:44:09.793967 containerd[1721]: time="2025-07-01T08:44:09.793942916Z" level=info msg="CreateContainer within sandbox \"e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 1 08:44:09.811542 containerd[1721]: time="2025-07-01T08:44:09.811518453Z" level=info msg="Container e6fabd976dd2d180ab44fd01e96b1d226bf6228da667c15d44eff32837344113: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:44:09.827469 containerd[1721]: time="2025-07-01T08:44:09.827448486Z" level=info msg="CreateContainer within sandbox \"e6fd4b044e64d902f0676b273d435e6ea87956961c323b6c8df41b9a9b2b896f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e6fabd976dd2d180ab44fd01e96b1d226bf6228da667c15d44eff32837344113\""
Jul 1 08:44:09.827990 containerd[1721]: time="2025-07-01T08:44:09.827897135Z" level=info msg="StartContainer for \"e6fabd976dd2d180ab44fd01e96b1d226bf6228da667c15d44eff32837344113\""
Jul 1 08:44:09.829066 containerd[1721]: time="2025-07-01T08:44:09.829029562Z" level=info msg="connecting to shim e6fabd976dd2d180ab44fd01e96b1d226bf6228da667c15d44eff32837344113" address="unix:///run/containerd/s/4285149128f920ae5b5d6b199fb1b29e6fa38c8303959714073fb07291f7f007" protocol=ttrpc version=3
Jul 1 08:44:09.847870 systemd[1]: Started cri-containerd-e6fabd976dd2d180ab44fd01e96b1d226bf6228da667c15d44eff32837344113.scope - libcontainer container e6fabd976dd2d180ab44fd01e96b1d226bf6228da667c15d44eff32837344113.
Jul 1 08:44:09.889048 containerd[1721]: time="2025-07-01T08:44:09.889008489Z" level=info msg="StartContainer for \"e6fabd976dd2d180ab44fd01e96b1d226bf6228da667c15d44eff32837344113\" returns successfully"
Jul 1 08:44:10.467013 containerd[1721]: time="2025-07-01T08:44:10.466970860Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6fabd976dd2d180ab44fd01e96b1d226bf6228da667c15d44eff32837344113\" id:\"70b13dec8a2a8b587088613d30005504e99641a2b28a55bd2e71ebaf0bda8319\" pid:5367 exited_at:{seconds:1751359450 nanos:466730472}"
Jul 1 08:44:10.478594 kubelet[3174]: I0701 08:44:10.478448 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-cbf49894c-dqk7k" podStartSLOduration=29.388783853 podStartE2EDuration="38.478431586s" podCreationTimestamp="2025-07-01 08:43:32 +0000 UTC" firstStartedPulling="2025-07-01 08:44:00.692554166 +0000 UTC m=+46.538165258" lastFinishedPulling="2025-07-01 08:44:09.7822019 +0000 UTC m=+55.627812991" observedRunningTime="2025-07-01 08:44:10.443573869 +0000 UTC m=+56.289184959" watchObservedRunningTime="2025-07-01 08:44:10.478431586 +0000 UTC m=+56.324042674"
Jul 1 08:44:14.207048 containerd[1721]: time="2025-07-01T08:44:14.207005155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:44:14.209340 containerd[1721]: time="2025-07-01T08:44:14.209304415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977"
Jul 1 08:44:14.211892 containerd[1721]: time="2025-07-01T08:44:14.211852333Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:44:14.215419 containerd[1721]: time="2025-07-01T08:44:14.215367565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:44:14.215835 containerd[1721]: time="2025-07-01T08:44:14.215692007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.432301801s"
Jul 1 08:44:14.215835 containerd[1721]: time="2025-07-01T08:44:14.215719553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 1 08:44:14.216896 containerd[1721]: time="2025-07-01T08:44:14.216546624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 1 08:44:14.218120 containerd[1721]: time="2025-07-01T08:44:14.217562218Z" level=info msg="CreateContainer within sandbox \"7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 1 08:44:14.244902 containerd[1721]: time="2025-07-01T08:44:14.244881009Z" level=info msg="Container 168c18883ee64b8c6dd7c15260bd27664b18964c78550c243d1122a61634582f: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:44:14.248597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1255257048.mount: Deactivated successfully.
Jul 1 08:44:14.261207 containerd[1721]: time="2025-07-01T08:44:14.261174090Z" level=info msg="CreateContainer within sandbox \"7bcb1a0d01f6a3575d26e1ee3231ea1c86aafd5e85e5385fc843054b812b3c51\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"168c18883ee64b8c6dd7c15260bd27664b18964c78550c243d1122a61634582f\""
Jul 1 08:44:14.262771 containerd[1721]: time="2025-07-01T08:44:14.261923132Z" level=info msg="StartContainer for \"168c18883ee64b8c6dd7c15260bd27664b18964c78550c243d1122a61634582f\""
Jul 1 08:44:14.263833 containerd[1721]: time="2025-07-01T08:44:14.263800833Z" level=info msg="connecting to shim 168c18883ee64b8c6dd7c15260bd27664b18964c78550c243d1122a61634582f" address="unix:///run/containerd/s/b02f68f11ea25e70b57b952335d29edcd64b0337f5ccdcbc023ae5cb4a017c69" protocol=ttrpc version=3
Jul 1 08:44:14.285929 systemd[1]: Started cri-containerd-168c18883ee64b8c6dd7c15260bd27664b18964c78550c243d1122a61634582f.scope - libcontainer container 168c18883ee64b8c6dd7c15260bd27664b18964c78550c243d1122a61634582f.
Jul 1 08:44:14.329486 containerd[1721]: time="2025-07-01T08:44:14.329464157Z" level=info msg="StartContainer for \"168c18883ee64b8c6dd7c15260bd27664b18964c78550c243d1122a61634582f\" returns successfully"
Jul 1 08:44:14.575392 containerd[1721]: time="2025-07-01T08:44:14.575353134Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:44:14.578768 containerd[1721]: time="2025-07-01T08:44:14.578411320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Jul 1 08:44:14.579709 containerd[1721]: time="2025-07-01T08:44:14.579686389Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 363.114711ms"
Jul 1 08:44:14.579782 containerd[1721]: time="2025-07-01T08:44:14.579772854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 1 08:44:14.581680 containerd[1721]: time="2025-07-01T08:44:14.581658611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Jul 1 08:44:14.583785 containerd[1721]: time="2025-07-01T08:44:14.583355748Z" level=info msg="CreateContainer within sandbox \"e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 1 08:44:14.600181 containerd[1721]: time="2025-07-01T08:44:14.600157535Z" level=info msg="Container 913d4d1838607bb569e687b7444754a15a5f082d66d8c1da6f35bbf7922f6453: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:44:14.618009 containerd[1721]: time="2025-07-01T08:44:14.617988834Z" level=info msg="CreateContainer within sandbox \"e89dbafd2033595b5a0f671de87532121f8b246ef6ed617870768c402b9f065b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"913d4d1838607bb569e687b7444754a15a5f082d66d8c1da6f35bbf7922f6453\""
Jul 1 08:44:14.618547 containerd[1721]: time="2025-07-01T08:44:14.618532499Z" level=info msg="StartContainer for \"913d4d1838607bb569e687b7444754a15a5f082d66d8c1da6f35bbf7922f6453\""
Jul 1 08:44:14.620545 containerd[1721]: time="2025-07-01T08:44:14.620470742Z" level=info msg="connecting to shim 913d4d1838607bb569e687b7444754a15a5f082d66d8c1da6f35bbf7922f6453" address="unix:///run/containerd/s/a560be45d4c00786f2a2f433fcab229157bf9fec6595d961304d00c0d2e1fe8e" protocol=ttrpc version=3
Jul 1 08:44:14.636023 systemd[1]: Started cri-containerd-913d4d1838607bb569e687b7444754a15a5f082d66d8c1da6f35bbf7922f6453.scope - libcontainer container 913d4d1838607bb569e687b7444754a15a5f082d66d8c1da6f35bbf7922f6453.
Jul 1 08:44:14.681559 containerd[1721]: time="2025-07-01T08:44:14.681537874Z" level=info msg="StartContainer for \"913d4d1838607bb569e687b7444754a15a5f082d66d8c1da6f35bbf7922f6453\" returns successfully"
Jul 1 08:44:15.453804 kubelet[3174]: I0701 08:44:15.453492 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-794ddc87fb-4c49p" podStartSLOduration=32.3464657 podStartE2EDuration="45.453472885s" podCreationTimestamp="2025-07-01 08:43:30 +0000 UTC" firstStartedPulling="2025-07-01 08:44:01.109418892 +0000 UTC m=+46.955029978" lastFinishedPulling="2025-07-01 08:44:14.216426072 +0000 UTC m=+60.062037163" observedRunningTime="2025-07-01 08:44:14.450035138 +0000 UTC m=+60.295646226" watchObservedRunningTime="2025-07-01 08:44:15.453472885 +0000 UTC m=+61.299083977"
Jul 1 08:44:15.842766 kubelet[3174]: I0701 08:44:15.842623 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-794ddc87fb-xsjw6" podStartSLOduration=32.975008336 podStartE2EDuration="45.842605731s" podCreationTimestamp="2025-07-01 08:43:30 +0000 UTC" firstStartedPulling="2025-07-01 08:44:01.712904621 +0000 UTC m=+47.558515706" lastFinishedPulling="2025-07-01 08:44:14.580502009 +0000 UTC m=+60.426113101" observedRunningTime="2025-07-01 08:44:15.454444072 +0000 UTC m=+61.300055167" watchObservedRunningTime="2025-07-01 08:44:15.842605731 +0000 UTC m=+61.688216823"
Jul 1 08:44:16.038584 containerd[1721]: time="2025-07-01T08:44:16.038021302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:44:16.041668 containerd[1721]: time="2025-07-01T08:44:16.041622122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190"
Jul 1 08:44:16.044918 containerd[1721]: time="2025-07-01T08:44:16.044892080Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:44:16.050133 containerd[1721]: time="2025-07-01T08:44:16.050096161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:44:16.052424 containerd[1721]: time="2025-07-01T08:44:16.052304232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.47034946s"
Jul 1 08:44:16.052424 containerd[1721]: time="2025-07-01T08:44:16.052331885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\""
Jul 1 08:44:16.055070 containerd[1721]: time="2025-07-01T08:44:16.054993724Z" level=info msg="CreateContainer within sandbox \"2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 1 08:44:16.074435 containerd[1721]: time="2025-07-01T08:44:16.074414635Z" level=info msg="Container 7a9e0c9ee057112d06e370967c4a94699702ee7067a7f45a075b7edb0cfcec38: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:44:16.109680 containerd[1721]: time="2025-07-01T08:44:16.109321061Z" level=info msg="CreateContainer within sandbox \"2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7a9e0c9ee057112d06e370967c4a94699702ee7067a7f45a075b7edb0cfcec38\""
Jul 1 08:44:16.111432 containerd[1721]: time="2025-07-01T08:44:16.110863994Z" level=info msg="StartContainer for \"7a9e0c9ee057112d06e370967c4a94699702ee7067a7f45a075b7edb0cfcec38\""
Jul 1 08:44:16.112874 containerd[1721]: time="2025-07-01T08:44:16.112814329Z" level=info msg="connecting to shim 7a9e0c9ee057112d06e370967c4a94699702ee7067a7f45a075b7edb0cfcec38" address="unix:///run/containerd/s/02135343f6247f1edd143adc212f555328f4acdf3ad58eb0ad933b85ca3c3d35" protocol=ttrpc version=3
Jul 1 08:44:16.140141 systemd[1]: Started cri-containerd-7a9e0c9ee057112d06e370967c4a94699702ee7067a7f45a075b7edb0cfcec38.scope - libcontainer container 7a9e0c9ee057112d06e370967c4a94699702ee7067a7f45a075b7edb0cfcec38.
Jul 1 08:44:16.182325 containerd[1721]: time="2025-07-01T08:44:16.182305241Z" level=info msg="StartContainer for \"7a9e0c9ee057112d06e370967c4a94699702ee7067a7f45a075b7edb0cfcec38\" returns successfully"
Jul 1 08:44:16.183356 containerd[1721]: time="2025-07-01T08:44:16.183338087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 1 08:44:16.445684 kubelet[3174]: I0701 08:44:16.445311 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 1 08:44:17.687005 containerd[1721]: time="2025-07-01T08:44:17.686858096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:44:17.690183 containerd[1721]: time="2025-07-01T08:44:17.690150107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 1 08:44:17.692772 containerd[1721]: time="2025-07-01T08:44:17.692729938Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:44:17.697137 containerd[1721]: time="2025-07-01T08:44:17.697091263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:44:17.697945 containerd[1721]: time="2025-07-01T08:44:17.697569401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.51420502s"
Jul 1 08:44:17.697945 containerd[1721]: time="2025-07-01T08:44:17.697597717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 1 08:44:17.700977 containerd[1721]: time="2025-07-01T08:44:17.700932136Z" level=info msg="CreateContainer within sandbox \"2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 1 08:44:17.717413 containerd[1721]: time="2025-07-01T08:44:17.717220727Z" level=info msg="Container 31cb147e856b56b98149024f5cf580c5930760647d73d25447c686ae963c5d76: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:44:17.735898 containerd[1721]: time="2025-07-01T08:44:17.735869023Z" level=info msg="CreateContainer within sandbox \"2b40f25b941c83924ae8bf91661cc39a7223ee5e82c601b310fc8dc30c06ee0e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"31cb147e856b56b98149024f5cf580c5930760647d73d25447c686ae963c5d76\""
Jul 1 08:44:17.736518 containerd[1721]: time="2025-07-01T08:44:17.736495337Z" level=info msg="StartContainer for \"31cb147e856b56b98149024f5cf580c5930760647d73d25447c686ae963c5d76\""
Jul 1 08:44:17.738855 containerd[1721]: time="2025-07-01T08:44:17.738781565Z" level=info msg="connecting to shim 31cb147e856b56b98149024f5cf580c5930760647d73d25447c686ae963c5d76" address="unix:///run/containerd/s/02135343f6247f1edd143adc212f555328f4acdf3ad58eb0ad933b85ca3c3d35" protocol=ttrpc version=3
Jul 1 08:44:17.764972 systemd[1]: Started cri-containerd-31cb147e856b56b98149024f5cf580c5930760647d73d25447c686ae963c5d76.scope - libcontainer container 31cb147e856b56b98149024f5cf580c5930760647d73d25447c686ae963c5d76.
Jul 1 08:44:17.816164 containerd[1721]: time="2025-07-01T08:44:17.815603567Z" level=info msg="StartContainer for \"31cb147e856b56b98149024f5cf580c5930760647d73d25447c686ae963c5d76\" returns successfully"
Jul 1 08:44:18.302028 kubelet[3174]: I0701 08:44:18.301981 3174 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 1 08:44:18.302028 kubelet[3174]: I0701 08:44:18.302035 3174 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 1 08:44:18.463469 kubelet[3174]: I0701 08:44:18.463415 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cpd2x" podStartSLOduration=31.229202787 podStartE2EDuration="46.463374733s" podCreationTimestamp="2025-07-01 08:43:32 +0000 UTC" firstStartedPulling="2025-07-01 08:44:02.464705677 +0000 UTC m=+48.310316770" lastFinishedPulling="2025-07-01 08:44:17.698877631 +0000 UTC m=+63.544488716" observedRunningTime="2025-07-01 08:44:18.461957754 +0000 UTC m=+64.307568845" watchObservedRunningTime="2025-07-01 08:44:18.463374733 +0000 UTC m=+64.308985823"
Jul 1 08:44:26.402109 containerd[1721]: time="2025-07-01T08:44:26.402053237Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e75f946df149813a89e15f6088f01d8f975be547efccff59564f7d7eb54a317\" id:\"8daf6584d49213ce1ff8388641d2f3ccf9eece1591c4e9c5c9649487256f7170\" pid:5555 exited_at:{seconds:1751359466 nanos:401778266}"
Jul 1 08:44:26.462218 containerd[1721]: time="2025-07-01T08:44:26.462184498Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e75f946df149813a89e15f6088f01d8f975be547efccff59564f7d7eb54a317\" id:\"d124412e47be1c0be26c293d4f8719c142f39f45e5feaabf5b15bfba8cf4e7f7\" pid:5579 exited_at:{seconds:1751359466 nanos:461288267}"
Jul 1 08:44:33.351948 containerd[1721]: time="2025-07-01T08:44:33.351910593Z" level=info msg="TaskExit event in podsandbox handler container_id:\"026343d5b458123281640a1c315c0fbe18bccfb8dd0d994e1a3729aadff38bab\" id:\"1d0183a7ceb28282dd9b38688efac5f002b2149e097a639b304c0792401a4ef6\" pid:5604 exited_at:{seconds:1751359473 nanos:351601794}"
Jul 1 08:44:34.185975 systemd[1]: Started sshd@7-10.200.8.13:22-10.200.16.10:41710.service - OpenSSH per-connection server daemon (10.200.16.10:41710).
Jul 1 08:44:34.836055 sshd[5619]: Accepted publickey for core from 10.200.16.10 port 41710 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:44:34.837959 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:44:34.843972 systemd-logind[1700]: New session 10 of user core.
Jul 1 08:44:34.848904 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 1 08:44:35.335311 sshd[5622]: Connection closed by 10.200.16.10 port 41710
Jul 1 08:44:35.335795 sshd-session[5619]: pam_unix(sshd:session): session closed for user core
Jul 1 08:44:35.339081 systemd[1]: sshd@7-10.200.8.13:22-10.200.16.10:41710.service: Deactivated successfully.
Jul 1 08:44:35.341010 systemd[1]: session-10.scope: Deactivated successfully.
Jul 1 08:44:35.342707 systemd-logind[1700]: Session 10 logged out. Waiting for processes to exit.
Jul 1 08:44:35.343918 systemd-logind[1700]: Removed session 10.
Jul 1 08:44:35.479591 containerd[1721]: time="2025-07-01T08:44:35.479545328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"026343d5b458123281640a1c315c0fbe18bccfb8dd0d994e1a3729aadff38bab\" id:\"0c0fdb42011b3dc1aec136261786d5ac6eed01b619e270f823c7dbd82b14b768\" pid:5646 exited_at:{seconds:1751359475 nanos:479342068}"
Jul 1 08:44:36.071148 kubelet[3174]: I0701 08:44:36.070821 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 1 08:44:40.448583 systemd[1]: Started sshd@8-10.200.8.13:22-10.200.16.10:53652.service - OpenSSH per-connection server daemon (10.200.16.10:53652).
Jul 1 08:44:40.471265 containerd[1721]: time="2025-07-01T08:44:40.471229902Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6fabd976dd2d180ab44fd01e96b1d226bf6228da667c15d44eff32837344113\" id:\"2b32fc92b9ad3409da15f9523fae2a23a53bf74239333410342fbd936e9901fb\" pid:5682 exited_at:{seconds:1751359480 nanos:470366948}"
Jul 1 08:44:41.075623 sshd[5687]: Accepted publickey for core from 10.200.16.10 port 53652 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:44:41.077609 sshd-session[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:44:41.081472 systemd-logind[1700]: New session 11 of user core.
Jul 1 08:44:41.084920 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 1 08:44:41.594770 sshd[5693]: Connection closed by 10.200.16.10 port 53652
Jul 1 08:44:41.596934 sshd-session[5687]: pam_unix(sshd:session): session closed for user core
Jul 1 08:44:41.601114 systemd-logind[1700]: Session 11 logged out. Waiting for processes to exit.
Jul 1 08:44:41.601608 systemd[1]: sshd@8-10.200.8.13:22-10.200.16.10:53652.service: Deactivated successfully.
Jul 1 08:44:41.605455 systemd[1]: session-11.scope: Deactivated successfully.
Jul 1 08:44:41.609513 systemd-logind[1700]: Removed session 11.
Jul 1 08:44:46.705672 systemd[1]: Started sshd@9-10.200.8.13:22-10.200.16.10:53666.service - OpenSSH per-connection server daemon (10.200.16.10:53666).
Jul 1 08:44:47.333793 sshd[5708]: Accepted publickey for core from 10.200.16.10 port 53666 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:44:47.334822 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:44:47.339583 systemd-logind[1700]: New session 12 of user core.
Jul 1 08:44:47.346882 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 1 08:44:47.892773 sshd[5711]: Connection closed by 10.200.16.10 port 53666
Jul 1 08:44:47.893544 sshd-session[5708]: pam_unix(sshd:session): session closed for user core
Jul 1 08:44:47.896496 systemd[1]: sshd@9-10.200.8.13:22-10.200.16.10:53666.service: Deactivated successfully.
Jul 1 08:44:47.898442 systemd[1]: session-12.scope: Deactivated successfully.
Jul 1 08:44:47.900433 systemd-logind[1700]: Session 12 logged out. Waiting for processes to exit.
Jul 1 08:44:47.905460 systemd-logind[1700]: Removed session 12.
Jul 1 08:44:48.008705 systemd[1]: Started sshd@10-10.200.8.13:22-10.200.16.10:53668.service - OpenSSH per-connection server daemon (10.200.16.10:53668).
Jul 1 08:44:48.639816 sshd[5724]: Accepted publickey for core from 10.200.16.10 port 53668 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:44:48.642996 sshd-session[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:44:48.650319 systemd-logind[1700]: New session 13 of user core.
Jul 1 08:44:48.657908 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 1 08:44:49.169738 sshd[5727]: Connection closed by 10.200.16.10 port 53668
Jul 1 08:44:49.170251 sshd-session[5724]: pam_unix(sshd:session): session closed for user core
Jul 1 08:44:49.173265 systemd[1]: sshd@10-10.200.8.13:22-10.200.16.10:53668.service: Deactivated successfully.
Jul 1 08:44:49.174794 systemd[1]: session-13.scope: Deactivated successfully.
Jul 1 08:44:49.175461 systemd-logind[1700]: Session 13 logged out. Waiting for processes to exit.
Jul 1 08:44:49.176673 systemd-logind[1700]: Removed session 13.
Jul 1 08:44:49.284995 systemd[1]: Started sshd@11-10.200.8.13:22-10.200.16.10:53672.service - OpenSSH per-connection server daemon (10.200.16.10:53672).
Jul 1 08:44:49.448979 containerd[1721]: time="2025-07-01T08:44:49.448703554Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6fabd976dd2d180ab44fd01e96b1d226bf6228da667c15d44eff32837344113\" id:\"917e95ad7f40bfa01e17fad4ca89fb959019b352e69672e96b3d92c92ee6dadc\" pid:5752 exited_at:{seconds:1751359489 nanos:447425342}"
Jul 1 08:44:49.928110 sshd[5737]: Accepted publickey for core from 10.200.16.10 port 53672 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:44:49.929126 sshd-session[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:44:49.932798 systemd-logind[1700]: New session 14 of user core.
Jul 1 08:44:49.937927 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 1 08:44:50.424641 sshd[5760]: Connection closed by 10.200.16.10 port 53672
Jul 1 08:44:50.425162 sshd-session[5737]: pam_unix(sshd:session): session closed for user core
Jul 1 08:44:50.428129 systemd[1]: sshd@11-10.200.8.13:22-10.200.16.10:53672.service: Deactivated successfully.
Jul 1 08:44:50.429831 systemd[1]: session-14.scope: Deactivated successfully.
Jul 1 08:44:50.430521 systemd-logind[1700]: Session 14 logged out. Waiting for processes to exit.
Jul 1 08:44:50.431550 systemd-logind[1700]: Removed session 14.
Jul 1 08:44:55.540974 systemd[1]: Started sshd@12-10.200.8.13:22-10.200.16.10:43670.service - OpenSSH per-connection server daemon (10.200.16.10:43670).
Jul 1 08:44:56.188250 sshd[5778]: Accepted publickey for core from 10.200.16.10 port 43670 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:44:56.189241 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:44:56.193220 systemd-logind[1700]: New session 15 of user core.
Jul 1 08:44:56.195918 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 1 08:44:56.462381 containerd[1721]: time="2025-07-01T08:44:56.462214443Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e75f946df149813a89e15f6088f01d8f975be547efccff59564f7d7eb54a317\" id:\"3333778e0a755ac3a9ba39cc98dcb3a4be50c5683940b6600cdb4292efe3fe06\" pid:5794 exited_at:{seconds:1751359496 nanos:461787452}"
Jul 1 08:44:56.674761 sshd[5781]: Connection closed by 10.200.16.10 port 43670
Jul 1 08:44:56.675212 sshd-session[5778]: pam_unix(sshd:session): session closed for user core
Jul 1 08:44:56.678325 systemd[1]: sshd@12-10.200.8.13:22-10.200.16.10:43670.service: Deactivated successfully.
Jul 1 08:44:56.680053 systemd[1]: session-15.scope: Deactivated successfully.
Jul 1 08:44:56.680713 systemd-logind[1700]: Session 15 logged out. Waiting for processes to exit.
Jul 1 08:44:56.681657 systemd-logind[1700]: Removed session 15.
Jul 1 08:45:01.792985 systemd[1]: Started sshd@13-10.200.8.13:22-10.200.16.10:51706.service - OpenSSH per-connection server daemon (10.200.16.10:51706).
Jul 1 08:45:02.439552 sshd[5817]: Accepted publickey for core from 10.200.16.10 port 51706 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:45:02.441300 sshd-session[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:45:02.446478 systemd-logind[1700]: New session 16 of user core.
Jul 1 08:45:02.452890 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 1 08:45:02.939826 sshd[5820]: Connection closed by 10.200.16.10 port 51706
Jul 1 08:45:02.940369 sshd-session[5817]: pam_unix(sshd:session): session closed for user core
Jul 1 08:45:02.944503 systemd[1]: sshd@13-10.200.8.13:22-10.200.16.10:51706.service: Deactivated successfully.
Jul 1 08:45:02.946204 systemd[1]: session-16.scope: Deactivated successfully.
Jul 1 08:45:02.946834 systemd-logind[1700]: Session 16 logged out. Waiting for processes to exit.
Jul 1 08:45:02.947957 systemd-logind[1700]: Removed session 16.
Jul 1 08:45:05.532425 containerd[1721]: time="2025-07-01T08:45:05.532361683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"026343d5b458123281640a1c315c0fbe18bccfb8dd0d994e1a3729aadff38bab\" id:\"fe3b2e5a842cdeb9f2ca1bdfa0f7a45fdf277759171e14da97a30f5e88b89ff5\" pid:5844 exited_at:{seconds:1751359505 nanos:532113813}"
Jul 1 08:45:08.059324 systemd[1]: Started sshd@14-10.200.8.13:22-10.200.16.10:51718.service - OpenSSH per-connection server daemon (10.200.16.10:51718).
Jul 1 08:45:08.690941 sshd[5854]: Accepted publickey for core from 10.200.16.10 port 51718 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:45:08.692929 sshd-session[5854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:45:08.698808 systemd-logind[1700]: New session 17 of user core.
Jul 1 08:45:08.704961 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 1 08:45:09.188173 sshd[5857]: Connection closed by 10.200.16.10 port 51718
Jul 1 08:45:09.188436 sshd-session[5854]: pam_unix(sshd:session): session closed for user core
Jul 1 08:45:09.191089 systemd[1]: sshd@14-10.200.8.13:22-10.200.16.10:51718.service: Deactivated successfully.
Jul 1 08:45:09.192799 systemd[1]: session-17.scope: Deactivated successfully.
Jul 1 08:45:09.194464 systemd-logind[1700]: Session 17 logged out. Waiting for processes to exit.
Jul 1 08:45:09.195353 systemd-logind[1700]: Removed session 17.
Jul 1 08:45:09.302444 systemd[1]: Started sshd@15-10.200.8.13:22-10.200.16.10:51726.service - OpenSSH per-connection server daemon (10.200.16.10:51726).
Jul 1 08:45:09.928919 sshd[5871]: Accepted publickey for core from 10.200.16.10 port 51726 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:45:09.929945 sshd-session[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:45:09.933660 systemd-logind[1700]: New session 18 of user core.
Jul 1 08:45:09.936938 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 1 08:45:10.497785 containerd[1721]: time="2025-07-01T08:45:10.497034782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6fabd976dd2d180ab44fd01e96b1d226bf6228da667c15d44eff32837344113\" id:\"bd9232cdf0923257bcce65158b032ee7902c1c78e92725beba048b122a8b0196\" pid:5893 exited_at:{seconds:1751359510 nanos:496540734}"
Jul 1 08:45:10.510224 sshd[5874]: Connection closed by 10.200.16.10 port 51726
Jul 1 08:45:10.510606 sshd-session[5871]: pam_unix(sshd:session): session closed for user core
Jul 1 08:45:10.514442 systemd-logind[1700]: Session 18 logged out. Waiting for processes to exit.
Jul 1 08:45:10.515162 systemd[1]: sshd@15-10.200.8.13:22-10.200.16.10:51726.service: Deactivated successfully.
Jul 1 08:45:10.517601 systemd[1]: session-18.scope: Deactivated successfully.
Jul 1 08:45:10.520775 systemd-logind[1700]: Removed session 18.
Jul 1 08:45:10.623050 systemd[1]: Started sshd@16-10.200.8.13:22-10.200.16.10:36938.service - OpenSSH per-connection server daemon (10.200.16.10:36938).
Jul 1 08:45:11.268455 sshd[5906]: Accepted publickey for core from 10.200.16.10 port 36938 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:45:11.269473 sshd-session[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:45:11.273678 systemd-logind[1700]: New session 19 of user core.
Jul 1 08:45:11.277909 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 1 08:45:12.577383 sshd[5909]: Connection closed by 10.200.16.10 port 36938
Jul 1 08:45:12.577853 sshd-session[5906]: pam_unix(sshd:session): session closed for user core
Jul 1 08:45:12.581064 systemd[1]: sshd@16-10.200.8.13:22-10.200.16.10:36938.service: Deactivated successfully.
Jul 1 08:45:12.582710 systemd[1]: session-19.scope: Deactivated successfully.
Jul 1 08:45:12.583530 systemd-logind[1700]: Session 19 logged out. Waiting for processes to exit.
Jul 1 08:45:12.584627 systemd-logind[1700]: Removed session 19.
Jul 1 08:45:12.691680 systemd[1]: Started sshd@17-10.200.8.13:22-10.200.16.10:36940.service - OpenSSH per-connection server daemon (10.200.16.10:36940).
Jul 1 08:45:13.324768 sshd[5927]: Accepted publickey for core from 10.200.16.10 port 36940 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:45:13.325861 sshd-session[5927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:45:13.331148 systemd-logind[1700]: New session 20 of user core.
Jul 1 08:45:13.335913 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 1 08:45:13.891162 sshd[5930]: Connection closed by 10.200.16.10 port 36940
Jul 1 08:45:13.891775 sshd-session[5927]: pam_unix(sshd:session): session closed for user core
Jul 1 08:45:13.895302 systemd[1]: sshd@17-10.200.8.13:22-10.200.16.10:36940.service: Deactivated successfully.
Jul 1 08:45:13.897174 systemd[1]: session-20.scope: Deactivated successfully.
Jul 1 08:45:13.897971 systemd-logind[1700]: Session 20 logged out. Waiting for processes to exit.
Jul 1 08:45:13.899485 systemd-logind[1700]: Removed session 20.
Jul 1 08:45:14.003488 systemd[1]: Started sshd@18-10.200.8.13:22-10.200.16.10:36950.service - OpenSSH per-connection server daemon (10.200.16.10:36950).
Jul 1 08:45:14.632649 sshd[5940]: Accepted publickey for core from 10.200.16.10 port 36950 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:45:14.635212 sshd-session[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:45:14.643203 systemd-logind[1700]: New session 21 of user core.
Jul 1 08:45:14.648899 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 1 08:45:15.172896 sshd[5945]: Connection closed by 10.200.16.10 port 36950
Jul 1 08:45:15.174514 sshd-session[5940]: pam_unix(sshd:session): session closed for user core
Jul 1 08:45:15.177694 systemd-logind[1700]: Session 21 logged out. Waiting for processes to exit.
Jul 1 08:45:15.179405 systemd[1]: sshd@18-10.200.8.13:22-10.200.16.10:36950.service: Deactivated successfully.
Jul 1 08:45:15.181900 systemd[1]: session-21.scope: Deactivated successfully.
Jul 1 08:45:15.184444 systemd-logind[1700]: Removed session 21.
Jul 1 08:45:20.283557 systemd[1]: Started sshd@19-10.200.8.13:22-10.200.16.10:60660.service - OpenSSH per-connection server daemon (10.200.16.10:60660).
Jul 1 08:45:20.911062 sshd[5965]: Accepted publickey for core from 10.200.16.10 port 60660 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:45:20.912733 sshd-session[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:45:20.916885 systemd-logind[1700]: New session 22 of user core.
Jul 1 08:45:20.921901 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 1 08:45:21.426280 sshd[5970]: Connection closed by 10.200.16.10 port 60660
Jul 1 08:45:21.427907 sshd-session[5965]: pam_unix(sshd:session): session closed for user core
Jul 1 08:45:21.430674 systemd-logind[1700]: Session 22 logged out. Waiting for processes to exit.
Jul 1 08:45:21.433098 systemd[1]: sshd@19-10.200.8.13:22-10.200.16.10:60660.service: Deactivated successfully.
Jul 1 08:45:21.435999 systemd[1]: session-22.scope: Deactivated successfully.
Jul 1 08:45:21.439301 systemd-logind[1700]: Removed session 22.
Jul 1 08:45:26.471100 containerd[1721]: time="2025-07-01T08:45:26.471054801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e75f946df149813a89e15f6088f01d8f975be547efccff59564f7d7eb54a317\" id:\"568fb09341f3f68bcd66dcb41a0fb0adde0e2d20935719172c899af377262939\" pid:5995 exited_at:{seconds:1751359526 nanos:470774780}"
Jul 1 08:45:26.535716 systemd[1]: Started sshd@20-10.200.8.13:22-10.200.16.10:60674.service - OpenSSH per-connection server daemon (10.200.16.10:60674).
Jul 1 08:45:27.159828 sshd[6007]: Accepted publickey for core from 10.200.16.10 port 60674 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:45:27.161631 sshd-session[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:45:27.165375 systemd-logind[1700]: New session 23 of user core.
Jul 1 08:45:27.171901 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 1 08:45:27.700052 sshd[6011]: Connection closed by 10.200.16.10 port 60674
Jul 1 08:45:27.701492 sshd-session[6007]: pam_unix(sshd:session): session closed for user core
Jul 1 08:45:27.704570 systemd[1]: sshd@20-10.200.8.13:22-10.200.16.10:60674.service: Deactivated successfully.
Jul 1 08:45:27.707034 systemd[1]: session-23.scope: Deactivated successfully.
Jul 1 08:45:27.709626 systemd-logind[1700]: Session 23 logged out. Waiting for processes to exit.
Jul 1 08:45:27.710912 systemd-logind[1700]: Removed session 23.
Jul 1 08:45:32.821850 systemd[1]: Started sshd@21-10.200.8.13:22-10.200.16.10:42548.service - OpenSSH per-connection server daemon (10.200.16.10:42548).
Jul 1 08:45:33.350103 containerd[1721]: time="2025-07-01T08:45:33.350059516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"026343d5b458123281640a1c315c0fbe18bccfb8dd0d994e1a3729aadff38bab\" id:\"c77099cf8f92f93aebf95f8fdf6bb3d4a04ae9714b6f892d5a6fdf96c94cc16a\" pid:6060 exited_at:{seconds:1751359533 nanos:349814599}"
Jul 1 08:45:33.452704 sshd[6045]: Accepted publickey for core from 10.200.16.10 port 42548 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:45:33.453650 sshd-session[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:45:33.457969 systemd-logind[1700]: New session 24 of user core.
Jul 1 08:45:33.469051 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 1 08:45:33.959774 sshd[6071]: Connection closed by 10.200.16.10 port 42548
Jul 1 08:45:33.958286 sshd-session[6045]: pam_unix(sshd:session): session closed for user core
Jul 1 08:45:33.962694 systemd[1]: sshd@21-10.200.8.13:22-10.200.16.10:42548.service: Deactivated successfully.
Jul 1 08:45:33.967437 systemd[1]: session-24.scope: Deactivated successfully.
Jul 1 08:45:33.967831 systemd-logind[1700]: Session 24 logged out. Waiting for processes to exit.
Jul 1 08:45:33.971384 systemd-logind[1700]: Removed session 24.
Jul 1 08:45:35.550155 containerd[1721]: time="2025-07-01T08:45:35.550110161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"026343d5b458123281640a1c315c0fbe18bccfb8dd0d994e1a3729aadff38bab\" id:\"2fc99d0bfa3636176fc5fa1e34d5ac11f367640ae2b73b950a8ad293c5f09cfe\" pid:6095 exited_at:{seconds:1751359535 nanos:549870290}"
Jul 1 08:45:39.081099 systemd[1]: Started sshd@22-10.200.8.13:22-10.200.16.10:42556.service - OpenSSH per-connection server daemon (10.200.16.10:42556).
Jul 1 08:45:39.718058 sshd[6106]: Accepted publickey for core from 10.200.16.10 port 42556 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:45:39.719076 sshd-session[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:45:39.723201 systemd-logind[1700]: New session 25 of user core.
Jul 1 08:45:39.728907 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 1 08:45:40.213836 sshd[6109]: Connection closed by 10.200.16.10 port 42556
Jul 1 08:45:40.214439 sshd-session[6106]: pam_unix(sshd:session): session closed for user core
Jul 1 08:45:40.219156 systemd-logind[1700]: Session 25 logged out. Waiting for processes to exit.
Jul 1 08:45:40.220035 systemd[1]: sshd@22-10.200.8.13:22-10.200.16.10:42556.service: Deactivated successfully.
Jul 1 08:45:40.223399 systemd[1]: session-25.scope: Deactivated successfully.
Jul 1 08:45:40.226523 systemd-logind[1700]: Removed session 25.
Jul 1 08:45:40.603182 containerd[1721]: time="2025-07-01T08:45:40.603140503Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6fabd976dd2d180ab44fd01e96b1d226bf6228da667c15d44eff32837344113\" id:\"10df9e983187fec62ef98a25c9a2a7a7c9a84fce6e27cb6ab41436c49718e1bc\" pid:6131 exited_at:{seconds:1751359540 nanos:602625680}"
Jul 1 08:45:45.324373 systemd[1]: Started sshd@23-10.200.8.13:22-10.200.16.10:49578.service - OpenSSH per-connection server daemon (10.200.16.10:49578).
Jul 1 08:45:45.951845 sshd[6142]: Accepted publickey for core from 10.200.16.10 port 49578 ssh2: RSA SHA256:wcLj5xftcIZrGWPjp5pcf5uRem8et04W08Beck37HCI
Jul 1 08:45:45.952866 sshd-session[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:45:45.956966 systemd-logind[1700]: New session 26 of user core.
Jul 1 08:45:45.961925 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 1 08:45:46.437125 sshd[6145]: Connection closed by 10.200.16.10 port 49578
Jul 1 08:45:46.437588 sshd-session[6142]: pam_unix(sshd:session): session closed for user core
Jul 1 08:45:46.440908 systemd[1]: sshd@23-10.200.8.13:22-10.200.16.10:49578.service: Deactivated successfully.
Jul 1 08:45:46.442466 systemd[1]: session-26.scope: Deactivated successfully.
Jul 1 08:45:46.443188 systemd-logind[1700]: Session 26 logged out. Waiting for processes to exit.
Jul 1 08:45:46.444239 systemd-logind[1700]: Removed session 26.
Jul 1 08:45:49.437774 containerd[1721]: time="2025-07-01T08:45:49.437618304Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6fabd976dd2d180ab44fd01e96b1d226bf6228da667c15d44eff32837344113\" id:\"bb63de110f8b01ad286ebf69047b54ad06fe39b3c86a123b5219219f2448fddc\" pid:6169 exited_at:{seconds:1751359549 nanos:437282477}"