Jun 20 19:13:03.036227 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025
Jun 20 19:13:03.036258 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:13:03.036270 kernel: BIOS-provided physical RAM map:
Jun 20 19:13:03.036277 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jun 20 19:13:03.036284 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jun 20 19:13:03.036291 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jun 20 19:13:03.036299 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jun 20 19:13:03.036308 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jun 20 19:13:03.036315 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jun 20 19:13:03.036322 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jun 20 19:13:03.036329 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jun 20 19:13:03.036336 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jun 20 19:13:03.036343 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jun 20 19:13:03.036350 kernel: printk: legacy bootconsole [earlyser0] enabled
Jun 20 19:13:03.036361 kernel: NX (Execute Disable) protection: active
Jun 20 19:13:03.036369 kernel: APIC: Static calls initialized
Jun 20 19:13:03.036376 kernel: efi: EFI v2.7 by Microsoft
Jun 20 19:13:03.036384 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eab5518 RNG=0x3ffd2018
Jun 20 19:13:03.036392 kernel: random: crng init done
Jun 20 19:13:03.036399 kernel: secureboot: Secure boot disabled
Jun 20 19:13:03.036409 kernel: SMBIOS 3.1.0 present.
Jun 20 19:13:03.036444 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Jun 20 19:13:03.036452 kernel: DMI: Memory slots populated: 2/2
Jun 20 19:13:03.036460 kernel: Hypervisor detected: Microsoft Hyper-V
Jun 20 19:13:03.036468 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jun 20 19:13:03.036474 kernel: Hyper-V: Nested features: 0x3e0101
Jun 20 19:13:03.036481 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jun 20 19:13:03.036488 kernel: Hyper-V: Using hypercall for remote TLB flush
Jun 20 19:13:03.036496 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 20 19:13:03.036502 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 20 19:13:03.036509 kernel: tsc: Detected 2299.998 MHz processor
Jun 20 19:13:03.036516 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 20 19:13:03.036526 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 20 19:13:03.036536 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jun 20 19:13:03.036544 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jun 20 19:13:03.036552 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 20 19:13:03.036560 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jun 20 19:13:03.036567 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jun 20 19:13:03.036575 kernel: Using GB pages for direct mapping
Jun 20 19:13:03.036583 kernel: ACPI: Early table checksum verification disabled
Jun 20 19:13:03.036594 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jun 20 19:13:03.036604 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:13:03.036612 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:13:03.036620 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jun 20 19:13:03.036628 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jun 20 19:13:03.036636 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:13:03.036644 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:13:03.036673 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:13:03.036681 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jun 20 19:13:03.036690 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jun 20 19:13:03.036698 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:13:03.036707 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jun 20 19:13:03.036715 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Jun 20 19:13:03.036724 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jun 20 19:13:03.036732 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jun 20 19:13:03.036740 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jun 20 19:13:03.036750 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jun 20 19:13:03.036758 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Jun 20 19:13:03.036766 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jun 20 19:13:03.036774 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jun 20 19:13:03.036782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jun 20 19:13:03.036790 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jun 20 19:13:03.036798 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jun 20 19:13:03.036807 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jun 20 19:13:03.036815 kernel: Zone ranges:
Jun 20 19:13:03.036825 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jun 20 19:13:03.036833 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jun 20 19:13:03.036841 kernel:   Normal   [mem 0x0000000100000000-0x00000002bfffffff]
Jun 20 19:13:03.036849 kernel:   Device   empty
Jun 20 19:13:03.036857 kernel: Movable zone start for each node
Jun 20 19:13:03.036865 kernel: Early memory node ranges
Jun 20 19:13:03.036873 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Jun 20 19:13:03.036882 kernel:   node   0: [mem 0x0000000000100000-0x00000000044fdfff]
Jun 20 19:13:03.036890 kernel:   node   0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jun 20 19:13:03.036899 kernel:   node   0: [mem 0x000000003ffff000-0x000000003fffffff]
Jun 20 19:13:03.036907 kernel:   node   0: [mem 0x0000000100000000-0x00000002bfffffff]
Jun 20 19:13:03.036915 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jun 20 19:13:03.036923 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 20 19:13:03.036931 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jun 20 19:13:03.036939 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jun 20 19:13:03.036947 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jun 20 19:13:03.036955 kernel: ACPI: PM-Timer IO Port: 0x408
Jun 20 19:13:03.036963 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 20 19:13:03.036973 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 19:13:03.036981 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 20 19:13:03.036989 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jun 20 19:13:03.036997 kernel: TSC deadline timer available
Jun 20 19:13:03.037005 kernel: CPU topo: Max. logical packages: 1
Jun 20 19:13:03.037013 kernel: CPU topo: Max. logical dies: 1
Jun 20 19:13:03.037021 kernel: CPU topo: Max. dies per package: 1
Jun 20 19:13:03.037029 kernel: CPU topo: Max. threads per core: 2
Jun 20 19:13:03.037037 kernel: CPU topo: Num. cores per package: 1
Jun 20 19:13:03.037047 kernel: CPU topo: Num. threads per package: 2
Jun 20 19:13:03.037055 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jun 20 19:13:03.037063 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jun 20 19:13:03.037071 kernel: Booting paravirtualized kernel on Hyper-V
Jun 20 19:13:03.037079 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 20 19:13:03.037088 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 20 19:13:03.037096 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jun 20 19:13:03.037104 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jun 20 19:13:03.037112 kernel: pcpu-alloc: [0] 0 1
Jun 20 19:13:03.037122 kernel: Hyper-V: PV spinlocks enabled
Jun 20 19:13:03.037130 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 20 19:13:03.037139 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:13:03.037148 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 19:13:03.037156 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jun 20 19:13:03.037165 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 19:13:03.037173 kernel: Fallback order for Node 0: 0
Jun 20 19:13:03.037181 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jun 20 19:13:03.037190 kernel: Policy zone: Normal
Jun 20 19:13:03.037199 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 19:13:03.037207 kernel: software IO TLB: area num 2.
Jun 20 19:13:03.037215 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 20 19:13:03.037223 kernel: ftrace: allocating 40093 entries in 157 pages
Jun 20 19:13:03.037231 kernel: ftrace: allocated 157 pages with 5 groups
Jun 20 19:13:03.037239 kernel: Dynamic Preempt: voluntary
Jun 20 19:13:03.037247 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 19:13:03.037256 kernel: rcu: 	RCU event tracing is enabled.
Jun 20 19:13:03.037274 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 20 19:13:03.037283 kernel: 	Trampoline variant of Tasks RCU enabled.
Jun 20 19:13:03.037292 kernel: 	Rude variant of Tasks RCU enabled.
Jun 20 19:13:03.037302 kernel: 	Tracing variant of Tasks RCU enabled.
Jun 20 19:13:03.037311 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 19:13:03.037320 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 20 19:13:03.037328 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:13:03.037337 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:13:03.037346 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
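The `Kernel command line:` entry above is a flat, space-separated list of `key=value` pairs and bare flags, which the kernel matches against registered parameters and passes leftovers (like `BOOT_IMAGE=`) to user space. A minimal Python sketch of parsing such a line (or `/proc/cmdline`); the sample string below is a subset of parameters copied from this log, and collapsing duplicate keys to the last occurrence is a simplification (the kernel keeps both `console=` entries):

```python
# Parse a kernel command line into a dict. Bare flags (no '=') map to None.
# Duplicate keys keep the last value here; the real kernel honors repeats
# such as multiple console= parameters.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else None
    return params

# A few parameters copied verbatim from this log's "Kernel command line:" entry:
sample = ("root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 "
          "flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin")
parsed = parse_cmdline(sample)
```

Note that `partition("=")` splits only on the first `=`, so values that themselves contain `=` (like `root=LABEL=ROOT`) survive intact.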
Jun 20 19:13:03.037356 kernel: Using NULL legacy PIC
Jun 20 19:13:03.037366 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jun 20 19:13:03.037375 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 19:13:03.037384 kernel: Console: colour dummy device 80x25
Jun 20 19:13:03.037393 kernel: printk: legacy console [tty1] enabled
Jun 20 19:13:03.037402 kernel: printk: legacy console [ttyS0] enabled
Jun 20 19:13:03.037411 kernel: printk: legacy bootconsole [earlyser0] disabled
Jun 20 19:13:03.037419 kernel: ACPI: Core revision 20240827
Jun 20 19:13:03.037430 kernel: Failed to register legacy timer interrupt
Jun 20 19:13:03.037439 kernel: APIC: Switch to symmetric I/O mode setup
Jun 20 19:13:03.037448 kernel: x2apic enabled
Jun 20 19:13:03.037457 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 20 19:13:03.037465 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jun 20 19:13:03.037474 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jun 20 19:13:03.037482 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jun 20 19:13:03.037491 kernel: Hyper-V: Using IPI hypercalls
Jun 20 19:13:03.037500 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jun 20 19:13:03.037510 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jun 20 19:13:03.037519 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jun 20 19:13:03.037528 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jun 20 19:13:03.037537 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jun 20 19:13:03.037545 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jun 20 19:13:03.037554 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jun 20 19:13:03.037563 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299998)
Jun 20 19:13:03.037572 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 20 19:13:03.037583 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jun 20 19:13:03.037591 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jun 20 19:13:03.037600 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 20 19:13:03.037608 kernel: Spectre V2 : Mitigation: Retpolines
Jun 20 19:13:03.037616 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 20 19:13:03.037626 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jun 20 19:13:03.037634 kernel: RETBleed: Vulnerable
Jun 20 19:13:03.037643 kernel: Speculative Store Bypass: Vulnerable
Jun 20 19:13:03.037660 kernel: ITS: Mitigation: Aligned branch/return thunks
Jun 20 19:13:03.037669 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 20 19:13:03.037678 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 20 19:13:03.037688 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 20 19:13:03.037697 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jun 20 19:13:03.037705 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jun 20 19:13:03.037714 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jun 20 19:13:03.037723 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jun 20 19:13:03.037732 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jun 20 19:13:03.037741 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jun 20 19:13:03.037750 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 20 19:13:03.037758 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jun 20 19:13:03.037767 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jun 20 19:13:03.037775 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jun 20 19:13:03.037786 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jun 20 19:13:03.037794 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jun 20 19:13:03.037803 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jun 20 19:13:03.037812 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jun 20 19:13:03.037820 kernel: Freeing SMP alternatives memory: 32K
Jun 20 19:13:03.037829 kernel: pid_max: default: 32768 minimum: 301
Jun 20 19:13:03.037837 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 20 19:13:03.037845 kernel: landlock: Up and running.
Jun 20 19:13:03.037854 kernel: SELinux:  Initializing.
Jun 20 19:13:03.037863 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 19:13:03.037872 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 19:13:03.037880 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jun 20 19:13:03.037891 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jun 20 19:13:03.037900 kernel: signal: max sigframe size: 11952
Jun 20 19:13:03.037909 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 19:13:03.037918 kernel: rcu: 	Max phase no-delay instances is 400.
Jun 20 19:13:03.037927 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 20 19:13:03.037935 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jun 20 19:13:03.037944 kernel: smp: Bringing up secondary CPUs ...
Jun 20 19:13:03.037953 kernel: smpboot: x86: Booting SMP configuration:
Jun 20 19:13:03.037961 kernel: .... node #0, CPUs: #1
Jun 20 19:13:03.037972 kernel: smp: Brought up 1 node, 2 CPUs
Jun 20 19:13:03.037981 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jun 20 19:13:03.037990 kernel: Memory: 8077024K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 299988K reserved, 0K cma-reserved)
Jun 20 19:13:03.037999 kernel: devtmpfs: initialized
Jun 20 19:13:03.038008 kernel: x86/mm: Memory block size: 128MB
Jun 20 19:13:03.038017 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jun 20 19:13:03.038026 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 19:13:03.038035 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 20 19:13:03.038043 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 19:13:03.038054 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 19:13:03.038063 kernel: audit: initializing netlink subsys (disabled)
Jun 20 19:13:03.038071 kernel: audit: type=2000 audit(1750446779.029:1): state=initialized audit_enabled=0 res=1
Jun 20 19:13:03.038080 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 19:13:03.038090 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 20 19:13:03.038098 kernel: cpuidle: using governor menu
Jun 20 19:13:03.038107 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 19:13:03.038116 kernel: dca service started, version 1.12.1
Jun 20 19:13:03.038125 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jun 20 19:13:03.038135 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jun 20 19:13:03.038144 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
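The `Memory: 8077024K/8383228K` entry above can be cross-checked against the `BIOS-e820` map printed at the start of the log: the second figure is the sum of the inclusive `usable` ranges. A short Python sketch (the ranges are transcribed from this log; the 4 KiB difference from the raw sum corresponds to the `e820: update [mem 0x00000000-0x00000fff] usable ==> reserved` entry):

```python
# "usable" ranges from the BIOS-e820 map earlier in this log (inclusive bounds).
USABLE = [
    (0x0000000000000000, 0x000000000009ffff),
    (0x0000000000100000, 0x00000000044fdfff),
    (0x00000000048fe000, 0x000000003ff1efff),
    (0x000000003ffff000, 0x000000003fffffff),
    (0x0000000100000000, 0x00000002bfffffff),
]

# Each [start-end] pair is inclusive, so the size is end - start + 1.
total_bytes = sum(end - start + 1 for start, end in USABLE)
total_kib = total_bytes // 1024
# total_kib is 8383232; subtracting the 4 KiB page at 0x0-0xfff that the
# kernel later flips to reserved gives the logged 8383228K total.
```

The first figure (8077024K) is what remains after the kernel carves out its own code, data, and other reservations listed in the same entry.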
Jun 20 19:13:03.038152 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 19:13:03.038161 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 19:13:03.038170 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 19:13:03.038179 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 19:13:03.038188 kernel: ACPI: Added _OSI(Module Device)
Jun 20 19:13:03.038196 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 19:13:03.038207 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 19:13:03.038215 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 19:13:03.038224 kernel: ACPI: Interpreter enabled
Jun 20 19:13:03.038233 kernel: ACPI: PM: (supports S0 S5)
Jun 20 19:13:03.038242 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 19:13:03.038251 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 20 19:13:03.038260 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jun 20 19:13:03.038268 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jun 20 19:13:03.038277 kernel: iommu: Default domain type: Translated
Jun 20 19:13:03.038286 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 20 19:13:03.038296 kernel: efivars: Registered efivars operations
Jun 20 19:13:03.038305 kernel: PCI: Using ACPI for IRQ routing
Jun 20 19:13:03.038314 kernel: PCI: System does not support PCI
Jun 20 19:13:03.038323 kernel: vgaarb: loaded
Jun 20 19:13:03.038332 kernel: clocksource: Switched to clocksource tsc-early
Jun 20 19:13:03.038341 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 19:13:03.038349 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 19:13:03.038359 kernel: pnp: PnP ACPI init
Jun 20 19:13:03.038367 kernel: pnp: PnP ACPI: found 3 devices
Jun 20 19:13:03.038377 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 20 19:13:03.038386 kernel: NET: Registered PF_INET protocol family
Jun 20 19:13:03.038395 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 20 19:13:03.038404 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jun 20 19:13:03.038413 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 19:13:03.038421 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 19:13:03.038430 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jun 20 19:13:03.038439 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jun 20 19:13:03.038450 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 20 19:13:03.038458 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 20 19:13:03.038468 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 19:13:03.038476 kernel: NET: Registered PF_XDP protocol family
Jun 20 19:13:03.038485 kernel: PCI: CLS 0 bytes, default 64
Jun 20 19:13:03.038494 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jun 20 19:13:03.038503 kernel: software IO TLB: mapped [mem 0x000000003a9c6000-0x000000003e9c6000] (64MB)
Jun 20 19:13:03.038512 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jun 20 19:13:03.038521 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jun 20 19:13:03.038531 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jun 20 19:13:03.038540 kernel: clocksource: Switched to clocksource tsc
Jun 20 19:13:03.038549 kernel: Initialise system trusted keyrings
Jun 20 19:13:03.038558 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jun 20 19:13:03.038567 kernel: Key type asymmetric registered
Jun 20 19:13:03.038575 kernel: Asymmetric key parser 'x509' registered
Jun 20 19:13:03.038584 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 20 19:13:03.038592 kernel: io scheduler mq-deadline registered
Jun 20 19:13:03.038601 kernel: io scheduler kyber registered
Jun 20 19:13:03.038611 kernel: io scheduler bfq registered
Jun 20 19:13:03.038620 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 20 19:13:03.038629 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 19:13:03.038638 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 19:13:03.038655 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jun 20 19:13:03.038665 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 19:13:03.038674 kernel: i8042: PNP: No PS/2 controller found.
Jun 20 19:13:03.038812 kernel: rtc_cmos 00:02: registered as rtc0
Jun 20 19:13:03.038890 kernel: rtc_cmos 00:02: setting system clock to 2025-06-20T19:13:02 UTC (1750446782)
Jun 20 19:13:03.039441 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jun 20 19:13:03.039466 kernel: intel_pstate: Intel P-state driver initializing
Jun 20 19:13:03.039475 kernel: efifb: probing for efifb
Jun 20 19:13:03.039484 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jun 20 19:13:03.039493 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jun 20 19:13:03.039503 kernel: efifb: scrolling: redraw
Jun 20 19:13:03.039512 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun 20 19:13:03.039521 kernel: Console: switching to colour frame buffer device 128x48
Jun 20 19:13:03.039533 kernel: fb0: EFI VGA frame buffer device
Jun 20 19:13:03.039541 kernel: pstore: Using crash dump compression: deflate
Jun 20 19:13:03.039550 kernel: pstore: Registered efi_pstore as persistent store backend
Jun 20 19:13:03.039559 kernel: NET: Registered PF_INET6 protocol family
Jun 20 19:13:03.039568 kernel: Segment Routing with IPv6
Jun 20 19:13:03.039578 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 19:13:03.039586 kernel: NET: Registered PF_PACKET protocol family
Jun 20 19:13:03.039596 kernel: Key type dns_resolver registered
Jun 20 19:13:03.039605 kernel: IPI shorthand broadcast: enabled
Jun 20 19:13:03.039617 kernel: sched_clock: Marking stable (3214004079, 101513012)->(3648604311, -333087220)
Jun 20 19:13:03.039626 kernel: registered taskstats version 1
Jun 20 19:13:03.039635 kernel: Loading compiled-in X.509 certificates
Jun 20 19:13:03.039644 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94'
Jun 20 19:13:03.039674 kernel: Demotion targets for Node 0: null
Jun 20 19:13:03.039684 kernel: Key type .fscrypt registered
Jun 20 19:13:03.039692 kernel: Key type fscrypt-provisioning registered
Jun 20 19:13:03.039701 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 20 19:13:03.039711 kernel: ima: Allocated hash algorithm: sha1
Jun 20 19:13:03.039722 kernel: ima: No architecture policies found
Jun 20 19:13:03.039731 kernel: clk: Disabling unused clocks
Jun 20 19:13:03.039740 kernel: Warning: unable to open an initial console.
Jun 20 19:13:03.039749 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jun 20 19:13:03.039758 kernel: Write protecting the kernel read-only data: 24576k
Jun 20 19:13:03.039767 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jun 20 19:13:03.039776 kernel: Run /init as init process
Jun 20 19:13:03.039785 kernel:   with arguments:
Jun 20 19:13:03.039795 kernel:     /init
Jun 20 19:13:03.039805 kernel:   with environment:
Jun 20 19:13:03.039813 kernel:     HOME=/
Jun 20 19:13:03.039822 kernel:     TERM=linux
Jun 20 19:13:03.039831 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 20 19:13:03.039842 systemd[1]: Successfully made /usr/ read-only.
Jun 20 19:13:03.039855 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:13:03.039866 systemd[1]: Detected virtualization microsoft.
Jun 20 19:13:03.039877 systemd[1]: Detected architecture x86-64.
Jun 20 19:13:03.039886 systemd[1]: Running in initrd.
Jun 20 19:13:03.039895 systemd[1]: No hostname configured, using default hostname.
Jun 20 19:13:03.039905 systemd[1]: Hostname set to .
Jun 20 19:13:03.039915 systemd[1]: Initializing machine ID from random generator.
Jun 20 19:13:03.039924 systemd[1]: Queued start job for default target initrd.target.
Jun 20 19:13:03.039934 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:13:03.039943 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:13:03.039956 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 19:13:03.039965 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:13:03.039975 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 20 19:13:03.039985 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 20 19:13:03.039996 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 20 19:13:03.040006 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
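The `\x2d` sequences in those `.device` unit names come from systemd's path escaping: `/` separators become `-`, and characters that are not safe in a unit name (such as the literal `-` in `EFI-SYSTEM`) become `\xNN` hex escapes. A rough Python approximation of `systemd-escape --path` (simplified; the real implementation also handles empty paths, leading dots, and a few more edge cases):

```python
# Approximate systemd's path -> unit-name escaping, as seen in the
# "Expecting device ..." entries above: '/' becomes '-', other unsafe
# characters become \xNN escapes of their UTF-8 bytes.
ALLOWED = set(
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")

def escape_path(path: str) -> str:
    components = path.strip("/").split("/")
    escaped = []
    for comp in components:
        escaped.append("".join(
            c if c in ALLOWED else "".join(r"\x%02x" % b for b in c.encode())
            for c in comp))
    return "-".join(escaped)

unit = escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device"
```

Applied to `/dev/disk/by-label/EFI-SYSTEM` this yields `dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device`, matching the unit name logged above; `/dev/mapper/usr` contains only safe characters and maps to `dev-mapper-usr.device`.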
Jun 20 19:13:03.040015 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:13:03.040026 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:13:03.040036 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:13:03.040045 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:13:03.040055 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:13:03.040065 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:13:03.040074 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:13:03.040084 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:13:03.040093 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 20 19:13:03.040102 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 20 19:13:03.040113 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:13:03.040123 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:13:03.040132 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:13:03.040142 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:13:03.040151 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 19:13:03.040161 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:13:03.040170 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 19:13:03.040180 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jun 20 19:13:03.040191 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 19:13:03.040201 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:13:03.040210 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:13:03.040229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:13:03.040241 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 19:13:03.040253 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:13:03.040281 systemd-journald[205]: Collecting audit messages is disabled.
Jun 20 19:13:03.040309 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 19:13:03.040320 systemd-journald[205]: Journal started
Jun 20 19:13:03.040345 systemd-journald[205]: Runtime Journal (/run/log/journal/5617b16dea5f49a49f23579d8639968f) is 8M, max 158.9M, 150.9M free.
Jun 20 19:13:03.048670 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:13:03.049155 systemd-modules-load[206]: Inserted module 'overlay'
Jun 20 19:13:03.053628 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:13:03.059763 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 19:13:03.064536 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:13:03.072166 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:13:03.085248 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:13:03.089463 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:13:03.091998 systemd-tmpfiles[222]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jun 20 19:13:03.101164 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:13:03.111930 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 19:13:03.115139 systemd-modules-load[206]: Inserted module 'br_netfilter'
Jun 20 19:13:03.116195 kernel: Bridge firewalling registered
Jun 20 19:13:03.117430 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:13:03.119781 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:13:03.125744 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:13:03.133755 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:13:03.136953 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 19:13:03.146758 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:13:03.150150 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:13:03.164007 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:13:03.200460 systemd-resolved[247]: Positive Trust Anchors:
Jun 20 19:13:03.200474 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:13:03.200512 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:13:03.222238 systemd-resolved[247]: Defaulting to hostname 'linux'.
Jun 20 19:13:03.225240 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:13:03.228183 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:13:03.243669 kernel: SCSI subsystem initialized
Jun 20 19:13:03.251664 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 19:13:03.261666 kernel: iscsi: registered transport (tcp)
Jun 20 19:13:03.279844 kernel: iscsi: registered transport (qla4xxx)
Jun 20 19:13:03.279910 kernel: QLogic iSCSI HBA Driver
Jun 20 19:13:03.294842 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:13:03.308520 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:13:03.309379 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:13:03.343609 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:13:03.345777 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 19:13:03.399676 kernel: raid6: avx512x4 gen() 43337 MB/s
Jun 20 19:13:03.417660 kernel: raid6: avx512x2 gen() 41141 MB/s
Jun 20 19:13:03.436661 kernel: raid6: avx512x1 gen() 25111 MB/s
Jun 20 19:13:03.454659 kernel: raid6: avx2x4 gen() 34683 MB/s
Jun 20 19:13:03.471659 kernel: raid6: avx2x2 gen() 36187 MB/s
Jun 20 19:13:03.491032 kernel: raid6: avx2x1 gen() 26633 MB/s
Jun 20 19:13:03.491053 kernel: raid6: using algorithm avx512x4 gen() 43337 MB/s
Jun 20 19:13:03.509667 kernel: raid6: .... xor() 6910 MB/s, rmw enabled
Jun 20 19:13:03.509689 kernel: raid6: using avx512x2 recovery algorithm
Jun 20 19:13:03.528678 kernel: xor: automatically using best checksumming function avx
Jun 20 19:13:03.650676 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 20 19:13:03.656059 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:13:03.659405 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:13:03.686050 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Jun 20 19:13:03.690882 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:13:03.694123 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 20 19:13:03.716787 dracut-pre-trigger[459]: rd.md=0: removing MD RAID activation
Jun 20 19:13:03.736231 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:13:03.737912 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:13:03.776703 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:13:03.782640 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 20 19:13:03.824671 kernel: cryptd: max_cpu_qlen set to 1000
Jun 20 19:13:03.833668 kernel: AES CTR mode by8 optimization enabled
Jun 20 19:13:03.867775 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:13:03.870113 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:13:03.875134 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:13:03.879712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:13:03.891233 kernel: hv_vmbus: Vmbus version:5.3
Jun 20 19:13:03.892364 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:13:03.892459 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:13:03.903725 kernel: pps_core: LinuxPPS API ver. 1 registered
Jun 20 19:13:03.903753 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jun 20 19:13:03.900485 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:13:03.913678 kernel: hv_vmbus: registering driver hyperv_keyboard
Jun 20 19:13:03.919431 kernel: hv_vmbus: registering driver hv_pci
Jun 20 19:13:03.919670 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jun 20 19:13:03.919747 kernel: hid: raw HID events driver (C) Jiri Kosina
Jun 20 19:13:03.928661 kernel: PTP clock support registered
Jun 20 19:13:03.930674 kernel: hv_vmbus: registering driver hv_netvsc
Jun 20 19:13:03.933679 kernel: hv_vmbus: registering driver hid_hyperv
Jun 20 19:13:03.938670 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Jun 20 19:13:03.945571 kernel: hv_utils: Registering HyperV Utility Driver
Jun 20 19:13:03.945686 kernel: hv_vmbus: registering driver hv_utils
Jun 20 19:13:03.945707 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jun 20 19:13:03.945719 kernel: hv_utils: Shutdown IC version 3.2
Jun 20 19:13:03.949793 kernel: hv_utils: Heartbeat IC version 3.0
Jun 20 19:13:03.949889 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jun 20 19:13:04.012382 kernel: hv_utils: TimeSync IC version 4.0
Jun 20 19:13:04.011802 systemd-resolved[247]: Clock change detected. Flushing caches.
Jun 20 19:13:04.017741 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Jun 20 19:13:04.022252 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Jun 20 19:13:04.022474 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Jun 20 19:13:04.024722 kernel: hv_vmbus: registering driver hv_storvsc
Jun 20 19:13:04.030551 kernel: scsi host0: storvsc_host_t
Jun 20 19:13:04.030782 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jun 20 19:13:04.030811 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Jun 20 19:13:04.038744 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Jun 20 19:13:04.040182 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:13:04.041576 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdc13207 (unnamed net_device) (uninitialized): VF slot 1 added
Jun 20 19:13:04.058807 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link)
Jun 20 19:13:04.062720 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Jun 20 19:13:04.063015 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Jun 20 19:13:04.076545 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jun 20 19:13:04.076778 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jun 20 19:13:04.078747 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jun 20 19:13:04.083709 kernel: nvme nvme0: pci function c05b:00:00.0
Jun 20 19:13:04.083901 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Jun 20 19:13:04.094930 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#227 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jun 20 19:13:04.111864 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#200 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jun 20 19:13:04.351778 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jun 20 19:13:04.357918 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jun 20 19:13:04.763725 kernel: nvme nvme0: using unchecked data buffer
Jun 20 19:13:04.966712 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Jun 20 19:13:05.001153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jun 20 19:13:05.037146 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Jun 20 19:13:05.038739 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
Jun 20 19:13:05.040840 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 20 19:13:05.069351 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Jun 20 19:13:05.071724 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jun 20 19:13:05.071743 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Jun 20 19:13:05.068552 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Jun 20 19:13:05.078725 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Jun 20 19:13:05.083828 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Jun 20 19:13:05.088986 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Jun 20 19:13:05.093809 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Jun 20 19:13:05.100461 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Jun 20 19:13:05.100508 kernel: pci 7870:00:00.0: enabling Extended Tags
Jun 20 19:13:05.117645 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Jun 20 19:13:05.117864 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Jun 20 19:13:05.122887 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Jun 20 19:13:05.128304 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Jun 20 19:13:05.138741 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Jun 20 19:13:05.143750 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdc13207 eth0: VF registering: eth1
Jun 20 19:13:05.143940 kernel: mana 7870:00:00.0 eth1: joined to eth0
Jun 20 19:13:05.164715 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Jun 20 19:13:05.534995 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:13:05.538528 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:13:05.544025 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:13:05.544446 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:13:05.546836 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 20 19:13:05.561796 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:13:06.089321 disk-uuid[667]: The operation has completed successfully.
Jun 20 19:13:06.092848 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jun 20 19:13:06.137494 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 20 19:13:06.137587 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 20 19:13:06.179519 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 20 19:13:06.198109 sh[721]: Success
Jun 20 19:13:06.228727 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 20 19:13:06.228931 kernel: device-mapper: uevent: version 1.0.3
Jun 20 19:13:06.230249 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jun 20 19:13:06.239716 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jun 20 19:13:06.488190 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 20 19:13:06.495797 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 20 19:13:06.512013 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 20 19:13:06.523714 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jun 20 19:13:06.527277 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (734)
Jun 20 19:13:06.527328 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966
Jun 20 19:13:06.528947 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:13:06.528963 kernel: BTRFS info (device dm-0): using free-space-tree
Jun 20 19:13:06.844405 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 20 19:13:06.849196 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:13:06.851018 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 20 19:13:06.852962 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 20 19:13:06.859906 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 20 19:13:06.891724 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (767)
Jun 20 19:13:06.894986 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:13:06.895166 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:13:06.895183 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jun 20 19:13:06.932742 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:13:06.934215 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 20 19:13:06.940865 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 20 19:13:06.946674 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:13:06.952542 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:13:06.984821 systemd-networkd[903]: lo: Link UP
Jun 20 19:13:06.984830 systemd-networkd[903]: lo: Gained carrier
Jun 20 19:13:06.986600 systemd-networkd[903]: Enumeration completed
Jun 20 19:13:06.991408 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jun 20 19:13:06.986998 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:13:06.987002 systemd-networkd[903]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:13:07.001125 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jun 20 19:13:07.001322 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdc13207 eth0: Data path switched to VF: enP30832s1
Jun 20 19:13:06.987323 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:13:06.992402 systemd[1]: Reached target network.target - Network.
Jun 20 19:13:07.001390 systemd-networkd[903]: enP30832s1: Link UP
Jun 20 19:13:07.001563 systemd-networkd[903]: eth0: Link UP
Jun 20 19:13:07.001753 systemd-networkd[903]: eth0: Gained carrier
Jun 20 19:13:07.001764 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:13:07.008244 systemd-networkd[903]: enP30832s1: Gained carrier
Jun 20 19:13:07.018735 systemd-networkd[903]: eth0: DHCPv4 address 10.200.4.4/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jun 20 19:13:07.848557 ignition[898]: Ignition 2.21.0
Jun 20 19:13:07.848571 ignition[898]: Stage: fetch-offline
Jun 20 19:13:07.848671 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:13:07.851143 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:13:07.848678 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 19:13:07.855852 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 20 19:13:07.848790 ignition[898]: parsed url from cmdline: ""
Jun 20 19:13:07.848793 ignition[898]: no config URL provided
Jun 20 19:13:07.848798 ignition[898]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:13:07.848804 ignition[898]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:13:07.848808 ignition[898]: failed to fetch config: resource requires networking
Jun 20 19:13:07.849225 ignition[898]: Ignition finished successfully
Jun 20 19:13:07.880206 ignition[914]: Ignition 2.21.0
Jun 20 19:13:07.880255 ignition[914]: Stage: fetch
Jun 20 19:13:07.880572 ignition[914]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:13:07.880582 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 19:13:07.880680 ignition[914]: parsed url from cmdline: ""
Jun 20 19:13:07.880684 ignition[914]: no config URL provided
Jun 20 19:13:07.880689 ignition[914]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:13:07.880716 ignition[914]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:13:07.880745 ignition[914]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jun 20 19:13:07.962220 ignition[914]: GET result: OK
Jun 20 19:13:07.962311 ignition[914]: config has been read from IMDS userdata
Jun 20 19:13:07.962342 ignition[914]: parsing config with SHA512: a94dcf71ad12c6d975cc46be517b0ac9b793f58af6f2756621e53f424525902dfa753ece256fd41cb048287fc06c832ef019bc8368070061a522387cca3c2c7e
Jun 20 19:13:07.966455 unknown[914]: fetched base config from "system"
Jun 20 19:13:07.966465 unknown[914]: fetched base config from "system"
Jun 20 19:13:07.966824 ignition[914]: fetch: fetch complete
Jun 20 19:13:07.966470 unknown[914]: fetched user config from "azure"
Jun 20 19:13:07.966830 ignition[914]: fetch: fetch passed
Jun 20 19:13:07.969392 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 20 19:13:07.966870 ignition[914]: Ignition finished successfully
Jun 20 19:13:07.974796 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 20 19:13:08.000323 ignition[921]: Ignition 2.21.0
Jun 20 19:13:08.000334 ignition[921]: Stage: kargs
Jun 20 19:13:08.000541 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:13:08.000550 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 19:13:08.005764 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 20 19:13:08.003151 ignition[921]: kargs: kargs passed
Jun 20 19:13:08.011835 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 20 19:13:08.003208 ignition[921]: Ignition finished successfully
Jun 20 19:13:08.029645 ignition[928]: Ignition 2.21.0
Jun 20 19:13:08.029655 ignition[928]: Stage: disks
Jun 20 19:13:08.031988 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 20 19:13:08.029891 ignition[928]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:13:08.036936 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 20 19:13:08.029899 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 19:13:08.039089 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 20 19:13:08.030801 ignition[928]: disks: disks passed
Jun 20 19:13:08.041388 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:13:08.030843 ignition[928]: Ignition finished successfully
Jun 20 19:13:08.047080 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:13:08.050812 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:13:08.055421 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 20 19:13:08.069185 systemd-networkd[903]: enP30832s1: Gained IPv6LL
Jun 20 19:13:08.130082 systemd-fsck[936]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jun 20 19:13:08.134755 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 20 19:13:08.140651 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 20 19:13:08.411727 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none.
Jun 20 19:13:08.412873 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 20 19:13:08.417143 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:13:08.436197 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:13:08.440808 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 20 19:13:08.458331 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jun 20 19:13:08.463886 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 20 19:13:08.479053 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (945)
Jun 20 19:13:08.479082 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:13:08.479094 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:13:08.479106 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jun 20 19:13:08.463926 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:13:08.469755 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 20 19:13:08.479813 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 20 19:13:08.488199 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:13:08.900855 systemd-networkd[903]: eth0: Gained IPv6LL
Jun 20 19:13:08.982015 coreos-metadata[947]: Jun 20 19:13:08.981 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jun 20 19:13:08.986733 coreos-metadata[947]: Jun 20 19:13:08.986 INFO Fetch successful
Jun 20 19:13:08.988342 coreos-metadata[947]: Jun 20 19:13:08.986 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jun 20 19:13:09.001045 coreos-metadata[947]: Jun 20 19:13:09.001 INFO Fetch successful
Jun 20 19:13:09.002497 coreos-metadata[947]: Jun 20 19:13:09.002 INFO wrote hostname ci-4344.1.0-a-69d2cbc98d to /sysroot/etc/hostname
Jun 20 19:13:09.006594 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 20 19:13:09.127962 initrd-setup-root[975]: cut: /sysroot/etc/passwd: No such file or directory
Jun 20 19:13:09.163543 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory
Jun 20 19:13:09.184394 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory
Jun 20 19:13:09.188847 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 20 19:13:10.031787 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 20 19:13:10.035780 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 20 19:13:10.039850 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 20 19:13:10.051669 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 20 19:13:10.057834 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:13:10.076103 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 20 19:13:10.081073 ignition[1063]: INFO : Ignition 2.21.0
Jun 20 19:13:10.081073 ignition[1063]: INFO : Stage: mount
Jun 20 19:13:10.081073 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:13:10.081073 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 19:13:10.081073 ignition[1063]: INFO : mount: mount passed
Jun 20 19:13:10.081073 ignition[1063]: INFO : Ignition finished successfully
Jun 20 19:13:10.079791 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 20 19:13:10.083964 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 20 19:13:10.100888 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:13:10.115720 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (1076)
Jun 20 19:13:10.117923 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:13:10.117964 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:13:10.118895 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jun 20 19:13:10.123317 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:13:10.152204 ignition[1092]: INFO : Ignition 2.21.0
Jun 20 19:13:10.152204 ignition[1092]: INFO : Stage: files
Jun 20 19:13:10.155834 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:13:10.155834 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 19:13:10.155834 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping
Jun 20 19:13:10.173663 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 20 19:13:10.173663 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 20 19:13:10.238534 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 20 19:13:10.242823 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 20 19:13:10.242823 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 20 19:13:10.242823 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jun 20 19:13:10.242823 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jun 20 19:13:10.238950 unknown[1092]: wrote ssh authorized keys file for user: core
Jun 20 19:13:10.299888 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 20 19:13:10.458318 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jun 20 19:13:10.464835 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jun 20 19:13:10.464835 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jun 20 19:13:10.464835 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:13:10.464835 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:13:10.464835 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:13:10.464835 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:13:10.464835 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:13:10.464835 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:13:10.495701 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:13:10.498346 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:13:10.498346 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:13:10.506399 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:13:10.506399 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:13:10.506399 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jun 20 19:13:11.367421 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jun 20 19:13:11.576209 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:13:11.576209 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jun 20 19:13:11.613885 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:13:11.623434 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:13:11.623434 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jun 20 19:13:11.623434 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 19:13:11.634666 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 19:13:11.634666 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:13:11.634666 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:13:11.634666 ignition[1092]: INFO : files: files passed
Jun 20 19:13:11.634666 ignition[1092]: INFO : Ignition finished successfully
Jun 20 19:13:11.629185 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 19:13:11.637880 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 19:13:11.655570 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 19:13:11.659732 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 19:13:11.662295 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 20 19:13:11.687466 initrd-setup-root-after-ignition[1122]: grep:
Jun 20 19:13:11.689207 initrd-setup-root-after-ignition[1126]: grep:
Jun 20 19:13:11.689207 initrd-setup-root-after-ignition[1122]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:13:11.689207 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:13:11.691127 initrd-setup-root-after-ignition[1126]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:13:11.690338 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:13:11.701217 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 19:13:11.705821 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 19:13:11.761218 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 19:13:11.761316 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 19:13:11.762233 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 19:13:11.767946 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 19:13:11.772815 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 19:13:11.773650 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 19:13:11.796691 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:13:11.798784 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 19:13:11.819839 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:13:11.820328 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:13:11.824900 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 19:13:11.829164 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 19:13:11.829305 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:13:11.838050 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 19:13:11.840865 systemd[1]: Stopped target basic.target - Basic System. Jun 20 19:13:11.844717 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 19:13:11.847864 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:13:11.852861 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 19:13:11.856146 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:13:11.859172 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 19:13:11.863970 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:13:11.867324 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 19:13:11.872858 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 19:13:11.875960 systemd[1]: Stopped target swap.target - Swaps. 
Jun 20 19:13:11.879821 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 19:13:11.879994 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:13:11.881079 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:13:11.881457 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:13:11.887505 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 19:13:11.888419 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:13:11.888526 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 19:13:11.888671 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 19:13:11.889288 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 19:13:11.889410 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:13:11.889860 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 19:13:11.889979 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 19:13:11.890247 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 20 19:13:11.890353 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 19:13:11.960770 ignition[1146]: INFO : Ignition 2.21.0 Jun 20 19:13:11.960770 ignition[1146]: INFO : Stage: umount Jun 20 19:13:11.960770 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:13:11.960770 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:13:11.960770 ignition[1146]: INFO : umount: umount passed Jun 20 19:13:11.960770 ignition[1146]: INFO : Ignition finished successfully Jun 20 19:13:11.892805 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jun 20 19:13:11.892866 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 19:13:11.893012 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:13:11.894785 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 19:13:11.895113 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 19:13:11.896824 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:13:11.897266 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 19:13:11.897381 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:13:11.917326 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 19:13:11.922858 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 19:13:11.948677 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 19:13:11.948816 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 19:13:11.950072 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 19:13:11.950113 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 19:13:11.950326 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 19:13:11.950356 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 19:13:11.950636 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 19:13:11.950663 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 19:13:11.950743 systemd[1]: Stopped target network.target - Network. Jun 20 19:13:11.951245 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 19:13:11.951278 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:13:11.951318 systemd[1]: Stopped target paths.target - Path Units. Jun 20 19:13:11.951564 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jun 20 19:13:11.956671 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:13:11.963501 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 19:13:11.969300 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 19:13:11.969563 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 19:13:11.969599 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:13:11.970113 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 19:13:11.970139 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:13:11.970177 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 19:13:11.970221 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 19:13:11.970446 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 19:13:11.970474 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 19:13:11.970918 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 19:13:11.971097 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 19:13:11.979067 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 19:13:11.979182 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 19:13:12.010186 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 19:13:12.010440 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 19:13:12.010534 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 19:13:12.080832 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdc13207 eth0: Data path switched from VF: enP30832s1 Jun 20 19:13:12.015215 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 19:13:12.016066 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Jun 20 19:13:12.019581 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 19:13:12.084982 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 20 19:13:12.019620 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:13:12.026414 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 19:13:12.037124 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 19:13:12.038720 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:13:12.039287 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:13:12.042429 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:13:12.048980 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 19:13:12.049032 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 19:13:12.049194 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 19:13:12.049222 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:13:12.050437 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:13:12.060747 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 19:13:12.060807 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:13:12.068160 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 19:13:12.068313 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:13:12.070791 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 19:13:12.070884 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 19:13:12.078492 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jun 20 19:13:12.078521 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:13:12.086845 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 19:13:12.086901 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:13:12.087226 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 19:13:12.087263 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 19:13:12.087555 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 19:13:12.087588 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:13:12.089823 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 19:13:12.120769 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 20 19:13:12.120869 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:13:12.127609 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 19:13:12.127664 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:13:12.137537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:13:12.137585 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:13:12.147509 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 19:13:12.147597 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 20 19:13:12.147633 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 19:13:12.147668 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:13:12.148259 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jun 20 19:13:12.148614 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 19:13:12.149354 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 19:13:12.149445 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 19:13:12.535179 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 19:13:12.535294 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 19:13:12.540033 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 19:13:12.540067 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 19:13:12.540116 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 19:13:12.541128 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 19:13:12.559668 systemd[1]: Switching root. Jun 20 19:13:12.622397 systemd-journald[205]: Journal stopped Jun 20 19:13:19.428583 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). Jun 20 19:13:19.428629 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 19:13:19.428642 kernel: SELinux: policy capability open_perms=1 Jun 20 19:13:19.428652 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 19:13:19.428660 kernel: SELinux: policy capability always_check_network=0 Jun 20 19:13:19.428669 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 19:13:19.428681 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 19:13:19.428690 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 19:13:19.428842 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 19:13:19.428853 kernel: SELinux: policy capability userspace_initial_context=0 Jun 20 19:13:19.428862 kernel: audit: type=1403 audit(1750446793.889:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 19:13:19.428874 systemd[1]: Successfully loaded SELinux policy in 132.341ms. 
Jun 20 19:13:19.428885 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.301ms. Jun 20 19:13:19.428900 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:13:19.428911 systemd[1]: Detected virtualization microsoft. Jun 20 19:13:19.428921 systemd[1]: Detected architecture x86-64. Jun 20 19:13:19.428931 systemd[1]: Detected first boot. Jun 20 19:13:19.428941 systemd[1]: Hostname set to . Jun 20 19:13:19.428954 systemd[1]: Initializing machine ID from random generator. Jun 20 19:13:19.428964 zram_generator::config[1189]: No configuration found. Jun 20 19:13:19.428975 kernel: Guest personality initialized and is inactive Jun 20 19:13:19.428984 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Jun 20 19:13:19.428994 kernel: Initialized host personality Jun 20 19:13:19.429003 kernel: NET: Registered PF_VSOCK protocol family Jun 20 19:13:19.429012 systemd[1]: Populated /etc with preset unit settings. Jun 20 19:13:19.429026 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 19:13:19.429037 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 19:13:19.429047 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 19:13:19.429057 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 19:13:19.429068 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 19:13:19.429079 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 19:13:19.429088 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Jun 20 19:13:19.429100 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 19:13:19.429110 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 19:13:19.429121 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 19:13:19.429132 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 19:13:19.429142 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 19:13:19.429152 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:13:19.429163 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:13:19.429173 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 19:13:19.429186 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 19:13:19.429198 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 19:13:19.429209 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:13:19.429220 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 19:13:19.429231 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:13:19.429242 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:13:19.429252 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 19:13:19.429262 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 19:13:19.429274 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 19:13:19.429284 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Jun 20 19:13:19.429295 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:13:19.429305 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:13:19.429316 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:13:19.429326 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:13:19.429337 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 19:13:19.429347 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 19:13:19.429360 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 19:13:19.429371 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:13:19.429381 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:13:19.429391 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:13:19.429402 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 19:13:19.429415 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 19:13:19.429425 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 19:13:19.429437 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 19:13:19.429447 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:13:19.429458 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 19:13:19.429468 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 19:13:19.429478 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 19:13:19.429489 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jun 20 19:13:19.429502 systemd[1]: Reached target machines.target - Containers. Jun 20 19:13:19.429513 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 19:13:19.429524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:13:19.429535 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:13:19.429546 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 19:13:19.429556 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:13:19.429567 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:13:19.429577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:13:19.429588 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 19:13:19.429600 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:13:19.429611 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 19:13:19.429622 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 19:13:19.429633 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 19:13:19.429643 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 19:13:19.429654 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 19:13:19.429664 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:13:19.429675 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jun 20 19:13:19.429687 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:13:19.429709 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:13:19.429721 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 19:13:19.429731 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 19:13:19.429742 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:13:19.429752 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 19:13:19.429763 systemd[1]: Stopped verity-setup.service. Jun 20 19:13:19.429774 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:13:19.429787 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:13:19.429797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:13:19.429807 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:13:19.429818 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:13:19.429828 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:13:19.429838 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 19:13:19.429849 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 19:13:19.429860 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 19:13:19.429871 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 19:13:19.429883 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 19:13:19.429919 systemd-journald[1272]: Collecting audit messages is disabled. 
Jun 20 19:13:19.429945 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 19:13:19.429956 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:13:19.429969 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:13:19.429981 systemd-journald[1272]: Journal started Jun 20 19:13:19.430007 systemd-journald[1272]: Runtime Journal (/run/log/journal/f0d1ba6454af4813a3bd6f9d64296d61) is 8M, max 158.9M, 150.9M free. Jun 20 19:13:18.780366 systemd[1]: Queued start job for default target multi-user.target. Jun 20 19:13:18.789379 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 20 19:13:18.789755 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 19:13:19.436376 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:13:19.439383 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 19:13:19.458017 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 19:13:19.458202 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 19:13:19.461239 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:13:19.463358 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 19:13:19.463385 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:13:19.466355 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 19:13:19.475822 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 19:13:19.478872 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:13:19.482867 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jun 20 19:13:19.489470 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 19:13:19.491728 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:13:19.498558 kernel: loop: module loaded Jun 20 19:13:19.498896 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 19:13:19.505085 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:13:19.509555 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 19:13:19.512236 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:13:19.513975 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:13:19.517215 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 19:13:19.524156 kernel: fuse: init (API version 7.41) Jun 20 19:13:19.523603 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:13:19.523994 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 19:13:19.524389 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 19:13:19.542224 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:13:19.602724 systemd-journald[1272]: Time spent on flushing to /var/log/journal/f0d1ba6454af4813a3bd6f9d64296d61 is 29.904ms for 978 entries. Jun 20 19:13:19.602724 systemd-journald[1272]: System Journal (/var/log/journal/f0d1ba6454af4813a3bd6f9d64296d61) is 8M, max 2.6G, 2.6G free. Jun 20 19:13:19.835763 systemd-journald[1272]: Received client request to flush runtime journal. 
Jun 20 19:13:19.835811 kernel: loop0: detected capacity change from 0 to 113872 Jun 20 19:13:19.797825 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 19:13:19.804810 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 19:13:19.809842 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 19:13:19.814306 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 19:13:19.818398 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 19:13:19.823053 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 19:13:19.826465 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 19:13:19.831518 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:13:19.839899 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 19:13:19.973731 kernel: ACPI: bus type drm_connector registered Jun 20 19:13:19.974718 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:13:19.974889 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:13:20.138531 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 19:13:20.142986 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 19:13:20.715009 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 19:13:20.715724 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 19:13:21.593727 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 19:13:21.640989 kernel: loop1: detected capacity change from 0 to 28496 Jun 20 19:13:21.642979 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jun 20 19:13:21.646125 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:13:22.049665 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Jun 20 19:13:22.049683 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Jun 20 19:13:22.054950 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:13:23.700722 kernel: loop2: detected capacity change from 0 to 146240 Jun 20 19:13:24.114877 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 19:13:24.118282 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:13:24.148118 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Jun 20 19:13:24.944626 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:13:24.950848 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:13:24.994818 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 19:13:25.050806 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#51 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 19:13:25.206718 kernel: hv_vmbus: registering driver hyperv_fb Jun 20 19:13:25.217749 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 20 19:13:25.218083 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 20 19:13:25.219836 kernel: Console: switching to colour dummy device 80x25 Jun 20 19:13:25.224262 kernel: Console: switching to colour frame buffer device 128x48 Jun 20 19:13:25.246774 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jun 20 19:13:25.250729 kernel: mousedev: PS/2 mouse device common for all mice
Jun 20 19:13:25.250802 kernel: hv_vmbus: registering driver hv_balloon
Jun 20 19:13:25.255805 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jun 20 19:13:25.322734 kernel: loop3: detected capacity change from 0 to 224512
Jun 20 19:13:25.502201 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 20 19:13:25.587358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:13:25.601360 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:13:25.602358 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:13:25.614775 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:13:25.617781 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:13:25.866732 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jun 20 19:13:25.958266 systemd-networkd[1359]: lo: Link UP
Jun 20 19:13:25.958286 systemd-networkd[1359]: lo: Gained carrier
Jun 20 19:13:25.959939 systemd-networkd[1359]: Enumeration completed
Jun 20 19:13:25.960047 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:13:25.962859 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:13:25.962947 systemd-networkd[1359]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:13:25.963517 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jun 20 19:13:25.967720 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jun 20 19:13:25.968535 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 20 19:13:25.974779 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jun 20 19:13:25.975047 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdc13207 eth0: Data path switched to VF: enP30832s1
Jun 20 19:13:25.978132 systemd-networkd[1359]: enP30832s1: Link UP
Jun 20 19:13:25.978205 systemd-networkd[1359]: eth0: Link UP
Jun 20 19:13:25.978209 systemd-networkd[1359]: eth0: Gained carrier
Jun 20 19:13:25.978225 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:13:25.988071 systemd-networkd[1359]: enP30832s1: Gained carrier
Jun 20 19:13:25.996752 systemd-networkd[1359]: eth0: DHCPv4 address 10.200.4.4/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jun 20 19:13:26.149643 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jun 20 19:13:26.200406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jun 20 19:13:26.203168 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 20 19:13:26.399281 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 20 19:13:26.539717 kernel: loop4: detected capacity change from 0 to 113872
Jun 20 19:13:26.553722 kernel: loop5: detected capacity change from 0 to 28496
Jun 20 19:13:26.568745 kernel: loop6: detected capacity change from 0 to 146240
Jun 20 19:13:26.585716 kernel: loop7: detected capacity change from 0 to 224512
Jun 20 19:13:27.076921 systemd-networkd[1359]: enP30832s1: Gained IPv6LL
Jun 20 19:13:27.268972 systemd-networkd[1359]: eth0: Gained IPv6LL
Jun 20 19:13:27.274427 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 20 19:13:27.379274 (sd-merge)[1452]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jun 20 19:13:27.379895 (sd-merge)[1452]: Merged extensions into '/usr'.
Jun 20 19:13:27.427589 systemd[1]: Reload requested from client PID 1309 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 19:13:27.427608 systemd[1]: Reloading...
Jun 20 19:13:27.470926 zram_generator::config[1483]: No configuration found.
Jun 20 19:13:27.595545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:13:27.696648 systemd[1]: Reloading finished in 268 ms.
Jun 20 19:13:27.726888 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 19:13:27.731089 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:13:27.741841 systemd[1]: Starting ensure-sysext.service...
Jun 20 19:13:27.744913 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:13:27.764813 systemd[1]: Reload requested from client PID 1543 ('systemctl') (unit ensure-sysext.service)...
Jun 20 19:13:27.764832 systemd[1]: Reloading...
Jun 20 19:13:27.770119 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jun 20 19:13:27.770144 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jun 20 19:13:27.770349 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 20 19:13:27.770556 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 20 19:13:27.771639 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 20 19:13:27.771908 systemd-tmpfiles[1544]: ACLs are not supported, ignoring.
Jun 20 19:13:27.771958 systemd-tmpfiles[1544]: ACLs are not supported, ignoring.
Jun 20 19:13:27.819725 zram_generator::config[1574]: No configuration found.
Jun 20 19:13:27.900247 systemd-tmpfiles[1544]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:13:27.900259 systemd-tmpfiles[1544]: Skipping /boot
Jun 20 19:13:27.909445 systemd-tmpfiles[1544]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:13:27.909564 systemd-tmpfiles[1544]: Skipping /boot
Jun 20 19:13:27.915653 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:13:28.014991 systemd[1]: Reloading finished in 249 ms.
Jun 20 19:13:28.046844 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:13:28.055763 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:13:28.069408 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 20 19:13:28.080195 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 20 19:13:28.092811 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:13:28.097912 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 20 19:13:28.102605 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:13:28.103308 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:13:28.116981 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:13:28.122620 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:13:28.128837 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:13:28.130780 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:13:28.130908 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:13:28.131005 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:13:28.139043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:13:28.140746 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:13:28.143481 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:13:28.143647 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:13:28.152092 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:13:28.152303 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:13:28.155394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:13:28.159921 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:13:28.162046 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:13:28.162170 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:13:28.162278 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:13:28.163179 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 20 19:13:28.167221 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:13:28.167391 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:13:28.175890 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:13:28.176367 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:13:28.180008 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:13:28.184903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:13:28.187471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:13:28.187583 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:13:28.187759 systemd[1]: Reached target time-set.target - System Time Set.
Jun 20 19:13:28.190101 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:13:28.191239 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:13:28.191405 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:13:28.195168 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:13:28.199908 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:13:28.202757 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:13:28.202916 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:13:28.207170 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:13:28.207310 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:13:28.210850 systemd[1]: Finished ensure-sysext.service.
Jun 20 19:13:28.216279 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:13:28.216338 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:13:28.407783 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 20 19:13:28.440223 systemd-resolved[1637]: Positive Trust Anchors:
Jun 20 19:13:28.440237 systemd-resolved[1637]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:13:28.440269 systemd-resolved[1637]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:13:28.444599 systemd-resolved[1637]: Using system hostname 'ci-4344.1.0-a-69d2cbc98d'.
Jun 20 19:13:28.446364 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:13:28.448399 systemd[1]: Reached target network.target - Network.
Jun 20 19:13:28.451808 systemd[1]: Reached target network-online.target - Network is Online.
Jun 20 19:13:28.454755 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:13:29.049334 augenrules[1676]: No rules
Jun 20 19:13:29.050497 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:13:29.050732 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:13:30.296013 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 20 19:13:30.299944 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 20 19:13:32.556955 ldconfig[1301]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 20 19:13:32.569574 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 19:13:32.572968 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 20 19:13:32.602629 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 20 19:13:32.606033 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:13:32.607651 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 20 19:13:32.610761 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 20 19:13:32.612586 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jun 20 19:13:32.614479 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 20 19:13:32.616022 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 20 19:13:32.618777 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 20 19:13:32.620344 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 20 19:13:32.620374 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:13:32.621513 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:13:32.638518 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 20 19:13:32.642921 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 20 19:13:32.646611 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jun 20 19:13:32.648673 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jun 20 19:13:32.651811 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jun 20 19:13:32.673269 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 20 19:13:32.677221 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jun 20 19:13:32.681394 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 20 19:13:32.683600 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:13:32.685123 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:13:32.687802 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:13:32.687827 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:13:32.690482 systemd[1]: Starting chronyd.service - NTP client/server...
Jun 20 19:13:32.695161 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 20 19:13:32.699729 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 20 19:13:32.704831 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 20 19:13:32.708067 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 20 19:13:32.715849 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 20 19:13:32.719087 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 20 19:13:32.721305 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 20 19:13:32.724042 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jun 20 19:13:32.725002 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Jun 20 19:13:32.728290 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jun 20 19:13:32.730157 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jun 20 19:13:32.737783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:13:32.744085 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 20 19:13:32.746116 jq[1693]: false
Jun 20 19:13:32.748159 KVP[1699]: KVP starting; pid is:1699
Jun 20 19:13:32.748866 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 20 19:13:32.754844 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 20 19:13:32.760173 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 20 19:13:32.765139 KVP[1699]: KVP LIC Version: 3.1
Jun 20 19:13:32.765713 kernel: hv_utils: KVP IC version 4.0
Jun 20 19:13:32.770921 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 20 19:13:32.780524 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 20 19:13:32.784596 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 20 19:13:32.786152 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Refreshing passwd entry cache
Jun 20 19:13:32.785583 oslogin_cache_refresh[1696]: Refreshing passwd entry cache
Jun 20 19:13:32.790396 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 20 19:13:32.792942 systemd[1]: Starting update-engine.service - Update Engine...
Jun 20 19:13:32.798303 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 20 19:13:32.804515 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 20 19:13:32.807006 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 20 19:13:32.807821 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 20 19:13:32.820432 extend-filesystems[1694]: Found /dev/nvme0n1p6
Jun 20 19:13:32.823395 (chronyd)[1688]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jun 20 19:13:32.829406 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Failure getting users, quitting
Jun 20 19:13:32.829406 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:13:32.829406 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Refreshing group entry cache
Jun 20 19:13:32.828133 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 20 19:13:32.827483 oslogin_cache_refresh[1696]: Failure getting users, quitting
Jun 20 19:13:32.828339 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 20 19:13:32.827502 oslogin_cache_refresh[1696]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:13:32.827548 oslogin_cache_refresh[1696]: Refreshing group entry cache
Jun 20 19:13:32.838663 chronyd[1729]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jun 20 19:13:32.843782 jq[1712]: true
Jun 20 19:13:32.849632 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Failure getting groups, quitting
Jun 20 19:13:32.849632 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:13:32.848849 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jun 20 19:13:32.847006 oslogin_cache_refresh[1696]: Failure getting groups, quitting
Jun 20 19:13:32.849064 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jun 20 19:13:32.847018 oslogin_cache_refresh[1696]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:13:32.856541 chronyd[1729]: Timezone right/UTC failed leap second check, ignoring
Jun 20 19:13:32.856744 chronyd[1729]: Loaded seccomp filter (level 2)
Jun 20 19:13:32.863453 systemd[1]: Started chronyd.service - NTP client/server.
Jun 20 19:13:32.869727 extend-filesystems[1694]: Found /dev/nvme0n1p9
Jun 20 19:13:32.872649 extend-filesystems[1694]: Checking size of /dev/nvme0n1p9
Jun 20 19:13:32.876212 (ntainerd)[1732]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 20 19:13:32.890636 jq[1737]: true
Jun 20 19:13:32.889108 systemd[1]: motdgen.service: Deactivated successfully.
Jun 20 19:13:32.889326 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 20 19:13:32.896208 tar[1717]: linux-amd64/LICENSE
Jun 20 19:13:32.900478 tar[1717]: linux-amd64/helm
Jun 20 19:13:32.912007 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 20 19:13:32.920133 extend-filesystems[1694]: Old size kept for /dev/nvme0n1p9
Jun 20 19:13:32.927105 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 20 19:13:32.927356 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 20 19:13:32.931969 update_engine[1711]: I20250620 19:13:32.931892 1711 main.cc:92] Flatcar Update Engine starting
Jun 20 19:13:32.984491 dbus-daemon[1691]: [system] SELinux support is enabled
Jun 20 19:13:32.984677 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 20 19:13:32.991028 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 20 19:13:32.991067 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 20 19:13:32.993381 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 20 19:13:32.993407 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 20 19:13:33.000162 systemd-logind[1709]: New seat seat0.
Jun 20 19:13:33.004827 update_engine[1711]: I20250620 19:13:33.002691 1711 update_check_scheduler.cc:74] Next update check in 3m3s
Jun 20 19:13:33.002942 systemd[1]: Started update-engine.service - Update Engine.
Jun 20 19:13:33.012174 systemd-logind[1709]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 20 19:13:33.032118 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 20 19:13:33.036366 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 20 19:13:33.101114 bash[1771]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 19:13:33.109914 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 20 19:13:33.113379 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jun 20 19:13:33.113541 coreos-metadata[1690]: Jun 20 19:13:33.113 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jun 20 19:13:33.125485 coreos-metadata[1690]: Jun 20 19:13:33.125 INFO Fetch successful
Jun 20 19:13:33.125569 coreos-metadata[1690]: Jun 20 19:13:33.125 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jun 20 19:13:33.131425 coreos-metadata[1690]: Jun 20 19:13:33.131 INFO Fetch successful
Jun 20 19:13:33.136336 coreos-metadata[1690]: Jun 20 19:13:33.136 INFO Fetching http://168.63.129.16/machine/84e5a33a-225e-445e-9ac9-b378d413aedd/fc61a106%2Dd261%2D4831%2D8266%2D6cf128db3ff3.%5Fci%2D4344.1.0%2Da%2D69d2cbc98d?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jun 20 19:13:33.141252 coreos-metadata[1690]: Jun 20 19:13:33.141 INFO Fetch successful
Jun 20 19:13:33.141593 coreos-metadata[1690]: Jun 20 19:13:33.141 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jun 20 19:13:33.151726 coreos-metadata[1690]: Jun 20 19:13:33.150 INFO Fetch successful
Jun 20 19:13:33.209370 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 20 19:13:33.213108 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 20 19:13:33.437909 locksmithd[1777]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 20 19:13:33.870941 containerd[1732]: time="2025-06-20T19:13:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jun 20 19:13:33.874763 containerd[1732]: time="2025-06-20T19:13:33.873645476Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jun 20 19:13:33.907615 containerd[1732]: time="2025-06-20T19:13:33.907558514Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.738µs"
Jun 20 19:13:33.911324 containerd[1732]: time="2025-06-20T19:13:33.909272527Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jun 20 19:13:33.911324 containerd[1732]: time="2025-06-20T19:13:33.909319621Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jun 20 19:13:33.911324 containerd[1732]: time="2025-06-20T19:13:33.909465915Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jun 20 19:13:33.911324 containerd[1732]: time="2025-06-20T19:13:33.909479291Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jun 20 19:13:33.911324 containerd[1732]: time="2025-06-20T19:13:33.909504383Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:13:33.911324 containerd[1732]: time="2025-06-20T19:13:33.909559729Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:13:33.911324 containerd[1732]: time="2025-06-20T19:13:33.909571173Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:13:33.911324 containerd[1732]: time="2025-06-20T19:13:33.909856175Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:13:33.911324 containerd[1732]: time="2025-06-20T19:13:33.909868640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:13:33.911324 containerd[1732]: time="2025-06-20T19:13:33.909880340Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:13:33.911324 containerd[1732]: time="2025-06-20T19:13:33.909888449Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jun 20 19:13:33.911324 containerd[1732]: time="2025-06-20T19:13:33.909942247Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jun 20 19:13:33.911638 containerd[1732]: time="2025-06-20T19:13:33.910150715Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:13:33.911638 containerd[1732]: time="2025-06-20T19:13:33.910175392Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:13:33.911638 containerd[1732]: time="2025-06-20T19:13:33.910186008Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jun 20 19:13:33.911638 containerd[1732]: time="2025-06-20T19:13:33.910221223Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jun 20 19:13:33.911638 containerd[1732]: time="2025-06-20T19:13:33.910511284Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jun 20 19:13:33.911638 containerd[1732]: time="2025-06-20T19:13:33.910555944Z" level=info msg="metadata content store policy set" policy=shared
Jun 20 19:13:33.927720 containerd[1732]: time="2025-06-20T19:13:33.926601448Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jun 20 19:13:33.927720 containerd[1732]: time="2025-06-20T19:13:33.926665723Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jun 20 19:13:33.927720 containerd[1732]: time="2025-06-20T19:13:33.926683287Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jun 20 19:13:33.927720 containerd[1732]: time="2025-06-20T19:13:33.926704053Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jun 20 19:13:33.927720 containerd[1732]: time="2025-06-20T19:13:33.926716166Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jun 20 19:13:33.927720 containerd[1732]: time="2025-06-20T19:13:33.926728178Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jun 20 19:13:33.927720 containerd[1732]: time="2025-06-20T19:13:33.926747531Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jun 20 19:13:33.927720 containerd[1732]: time="2025-06-20T19:13:33.926760832Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jun 20 19:13:33.927720 containerd[1732]: time="2025-06-20T19:13:33.926779925Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jun 20 19:13:33.927720 containerd[1732]: time="2025-06-20T19:13:33.926791698Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jun 20 19:13:33.927720 containerd[1732]: time="2025-06-20T19:13:33.926800993Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jun 20 19:13:33.927720 containerd[1732]: time="2025-06-20T19:13:33.926813858Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jun 20 19:13:33.927720 containerd[1732]: time="2025-06-20T19:13:33.926960399Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jun 20 19:13:33.927720 containerd[1732]: time="2025-06-20T19:13:33.926977687Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jun 20 19:13:33.928078 containerd[1732]: time="2025-06-20T19:13:33.926992077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jun 20 19:13:33.928078 containerd[1732]: time="2025-06-20T19:13:33.927002856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jun 20 19:13:33.928078 containerd[1732]: time="2025-06-20T19:13:33.927017201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jun 20 19:13:33.928078 containerd[1732]: time="2025-06-20T19:13:33.927037698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jun 20 19:13:33.928078 containerd[1732]: time="2025-06-20T19:13:33.927049726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jun 20 19:13:33.928078 containerd[1732]: time="2025-06-20T19:13:33.927059908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jun 20
19:13:33.928078 containerd[1732]: time="2025-06-20T19:13:33.927072093Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 20 19:13:33.928078 containerd[1732]: time="2025-06-20T19:13:33.927082539Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 20 19:13:33.928078 containerd[1732]: time="2025-06-20T19:13:33.927094137Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 20 19:13:33.928078 containerd[1732]: time="2025-06-20T19:13:33.927166966Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 20 19:13:33.928078 containerd[1732]: time="2025-06-20T19:13:33.927180145Z" level=info msg="Start snapshots syncer" Jun 20 19:13:33.928078 containerd[1732]: time="2025-06-20T19:13:33.927202591Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 20 19:13:33.928319 containerd[1732]: time="2025-06-20T19:13:33.927516126Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 20 19:13:33.928319 containerd[1732]: time="2025-06-20T19:13:33.927580831Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 20 19:13:33.928447 containerd[1732]: time="2025-06-20T19:13:33.927650847Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 20 19:13:33.931712 containerd[1732]: time="2025-06-20T19:13:33.931382601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 20 19:13:33.931712 containerd[1732]: time="2025-06-20T19:13:33.931421319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 20 19:13:33.931712 containerd[1732]: time="2025-06-20T19:13:33.931434775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 20 19:13:33.931712 containerd[1732]: time="2025-06-20T19:13:33.931445455Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 20 19:13:33.931712 containerd[1732]: time="2025-06-20T19:13:33.931458391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 20 19:13:33.931712 containerd[1732]: time="2025-06-20T19:13:33.931469237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 20 19:13:33.931712 containerd[1732]: time="2025-06-20T19:13:33.931490997Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 20 19:13:33.931712 containerd[1732]: time="2025-06-20T19:13:33.931520906Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 20 19:13:33.931712 containerd[1732]: time="2025-06-20T19:13:33.931532497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 20 19:13:33.931712 containerd[1732]: time="2025-06-20T19:13:33.931543716Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 20 19:13:33.931712 containerd[1732]: time="2025-06-20T19:13:33.931581582Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:13:33.931712 containerd[1732]: time="2025-06-20T19:13:33.931597085Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:13:33.931712 containerd[1732]: time="2025-06-20T19:13:33.931606347Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:13:33.932025 containerd[1732]: time="2025-06-20T19:13:33.931624626Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:13:33.932025 containerd[1732]: time="2025-06-20T19:13:33.931642335Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 19:13:33.932025 containerd[1732]: time="2025-06-20T19:13:33.931662944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 19:13:33.932025 containerd[1732]: time="2025-06-20T19:13:33.931674117Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 19:13:33.934884 sshd_keygen[1740]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:13:33.936359 containerd[1732]: time="2025-06-20T19:13:33.931691311Z" level=info msg="runtime interface created" Jun 20 19:13:33.936359 containerd[1732]: time="2025-06-20T19:13:33.935160646Z" level=info msg="created NRI interface" Jun 20 19:13:33.936359 containerd[1732]: time="2025-06-20T19:13:33.935183558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 19:13:33.936359 containerd[1732]: time="2025-06-20T19:13:33.935207750Z" level=info msg="Connect containerd service" Jun 20 19:13:33.936359 containerd[1732]: time="2025-06-20T19:13:33.935260207Z" level=info msg="using 
experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:13:33.936359 containerd[1732]: time="2025-06-20T19:13:33.935988940Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:13:33.953320 tar[1717]: linux-amd64/README.md Jun 20 19:13:33.970042 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:13:33.975109 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:13:33.980644 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 20 19:13:33.983808 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:13:33.999877 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:13:34.002900 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:13:34.009916 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:13:34.013830 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 20 19:13:34.032318 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:13:34.036942 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:13:34.040944 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:13:34.044121 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 19:13:34.468102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:13:34.475021 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:13:34.903364 containerd[1732]: time="2025-06-20T19:13:34.903172669Z" level=info msg="Start subscribing containerd event"
Jun 20 19:13:34.903364 containerd[1732]: time="2025-06-20T19:13:34.903235973Z" level=info msg="Start recovering state"
Jun 20 19:13:34.903754 containerd[1732]: time="2025-06-20T19:13:34.903391188Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 20 19:13:34.903754 containerd[1732]: time="2025-06-20T19:13:34.903464852Z" level=info msg="Start event monitor"
Jun 20 19:13:34.903754 containerd[1732]: time="2025-06-20T19:13:34.903480745Z" level=info msg="Start cni network conf syncer for default"
Jun 20 19:13:34.903754 containerd[1732]: time="2025-06-20T19:13:34.903489515Z" level=info msg="Start streaming server"
Jun 20 19:13:34.903754 containerd[1732]: time="2025-06-20T19:13:34.903499161Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jun 20 19:13:34.903754 containerd[1732]: time="2025-06-20T19:13:34.903506631Z" level=info msg="runtime interface starting up..."
Jun 20 19:13:34.903754 containerd[1732]: time="2025-06-20T19:13:34.903512615Z" level=info msg="starting plugins..."
Jun 20 19:13:34.903754 containerd[1732]: time="2025-06-20T19:13:34.903534452Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jun 20 19:13:34.903928 containerd[1732]: time="2025-06-20T19:13:34.903764371Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 20 19:13:34.903928 containerd[1732]: time="2025-06-20T19:13:34.903826238Z" level=info msg="containerd successfully booted in 1.033622s"
Jun 20 19:13:34.904067 systemd[1]: Started containerd.service - containerd container runtime.
Jun 20 19:13:34.907577 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 20 19:13:34.911190 systemd[1]: Startup finished in 3.356s (kernel) + 10.962s (initrd) + 21.151s (userspace) = 35.470s.
Jun 20 19:13:35.018288 kubelet[1853]: E0620 19:13:35.018246 1853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:13:35.021145 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:13:35.021285 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:13:35.021758 systemd[1]: kubelet.service: Consumed 991ms CPU time, 265.1M memory peak.
Jun 20 19:13:35.375399 login[1842]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Jun 20 19:13:35.375909 login[1841]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 20 19:13:35.737430 systemd-logind[1709]: New session 2 of user core.
Jun 20 19:13:35.738412 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 20 19:13:35.739935 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 20 19:13:35.758267 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 20 19:13:35.760557 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 20 19:13:35.768670 (systemd)[1870]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 20 19:13:35.770606 systemd-logind[1709]: New session c1 of user core.
Jun 20 19:13:35.948772 systemd[1870]: Queued start job for default target default.target.
Jun 20 19:13:35.956660 systemd[1870]: Created slice app.slice - User Application Slice.
Jun 20 19:13:35.956714 systemd[1870]: Reached target paths.target - Paths.
Jun 20 19:13:35.956760 systemd[1870]: Reached target timers.target - Timers.
Jun 20 19:13:35.957844 systemd[1870]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 20 19:13:35.966817 systemd[1870]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 20 19:13:35.966872 systemd[1870]: Reached target sockets.target - Sockets.
Jun 20 19:13:35.966910 systemd[1870]: Reached target basic.target - Basic System.
Jun 20 19:13:35.966980 systemd[1870]: Reached target default.target - Main User Target.
Jun 20 19:13:35.967006 systemd[1870]: Startup finished in 191ms.
Jun 20 19:13:35.967184 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 20 19:13:35.974820 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 20 19:13:36.377623 login[1842]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 20 19:13:36.382586 systemd-logind[1709]: New session 1 of user core.
Jun 20 19:13:36.388926 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 20 19:13:36.905174 waagent[1837]: 2025-06-20T19:13:36.905092Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Jun 20 19:13:36.906831 waagent[1837]: 2025-06-20T19:13:36.906338Z INFO Daemon Daemon OS: flatcar 4344.1.0
Jun 20 19:13:36.907976 waagent[1837]: 2025-06-20T19:13:36.907901Z INFO Daemon Daemon Python: 3.11.12
Jun 20 19:13:36.909192 waagent[1837]: 2025-06-20T19:13:36.909126Z INFO Daemon Daemon Run daemon
Jun 20 19:13:36.910344 waagent[1837]: 2025-06-20T19:13:36.910291Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.0'
Jun 20 19:13:36.916611 waagent[1837]: 2025-06-20T19:13:36.911036Z INFO Daemon Daemon Using waagent for provisioning
Jun 20 19:13:36.916611 waagent[1837]: 2025-06-20T19:13:36.911555Z INFO Daemon Daemon Activate resource disk
Jun 20 19:13:36.916611 waagent[1837]: 2025-06-20T19:13:36.911789Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jun 20 19:13:36.916611 waagent[1837]: 2025-06-20T19:13:36.914003Z INFO Daemon Daemon Found device: None
Jun 20 19:13:36.916611 waagent[1837]: 2025-06-20T19:13:36.914107Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jun 20 19:13:36.916611 waagent[1837]: 2025-06-20T19:13:36.914691Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jun 20 19:13:36.916611 waagent[1837]: 2025-06-20T19:13:36.915509Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jun 20 19:13:36.916611 waagent[1837]: 2025-06-20T19:13:36.915845Z INFO Daemon Daemon Running default provisioning handler
Jun 20 19:13:36.925338 waagent[1837]: 2025-06-20T19:13:36.924904Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jun 20 19:13:36.925722 waagent[1837]: 2025-06-20T19:13:36.925676Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jun 20 19:13:36.925973 waagent[1837]: 2025-06-20T19:13:36.925953Z INFO Daemon Daemon cloud-init is enabled: False
Jun 20 19:13:36.926627 waagent[1837]: 2025-06-20T19:13:36.926609Z INFO Daemon Daemon Copying ovf-env.xml
Jun 20 19:13:36.965957 waagent[1837]: 2025-06-20T19:13:36.965874Z INFO Daemon Daemon Successfully mounted dvd
Jun 20 19:13:36.992009 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jun 20 19:13:36.994435 waagent[1837]: 2025-06-20T19:13:36.994373Z INFO Daemon Daemon Detect protocol endpoint
Jun 20 19:13:36.995353 waagent[1837]: 2025-06-20T19:13:36.994757Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jun 20 19:13:36.995353 waagent[1837]: 2025-06-20T19:13:36.994874Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jun 20 19:13:36.995353 waagent[1837]: 2025-06-20T19:13:36.995125Z INFO Daemon Daemon Test for route to 168.63.129.16
Jun 20 19:13:36.995353 waagent[1837]: 2025-06-20T19:13:36.995292Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jun 20 19:13:36.995488 waagent[1837]: 2025-06-20T19:13:36.995354Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jun 20 19:13:37.016603 waagent[1837]: 2025-06-20T19:13:37.016566Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jun 20 19:13:37.018115 waagent[1837]: 2025-06-20T19:13:37.017333Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jun 20 19:13:37.018115 waagent[1837]: 2025-06-20T19:13:37.018108Z INFO Daemon Daemon Server preferred version:2015-04-05
Jun 20 19:13:37.351975 waagent[1837]: 2025-06-20T19:13:37.351875Z INFO Daemon Daemon Initializing goal state during protocol detection
Jun 20 19:13:37.353099 waagent[1837]: 2025-06-20T19:13:37.352443Z INFO Daemon Daemon Forcing an update of the goal state.
Jun 20 19:13:37.356466 waagent[1837]: 2025-06-20T19:13:37.356428Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jun 20 19:13:37.373185 waagent[1837]: 2025-06-20T19:13:37.373149Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Jun 20 19:13:37.374337 waagent[1837]: 2025-06-20T19:13:37.374279Z INFO Daemon
Jun 20 19:13:37.374417 waagent[1837]: 2025-06-20T19:13:37.374386Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 0a2edf44-4215-4101-b9d1-0401f7a9da38 eTag: 4034625542340805510 source: Fabric]
Jun 20 19:13:37.380295 waagent[1837]: 2025-06-20T19:13:37.374689Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jun 20 19:13:37.380295 waagent[1837]: 2025-06-20T19:13:37.374967Z INFO Daemon
Jun 20 19:13:37.380295 waagent[1837]: 2025-06-20T19:13:37.375164Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jun 20 19:13:37.383329 waagent[1837]: 2025-06-20T19:13:37.383301Z INFO Daemon Daemon Downloading artifacts profile blob
Jun 20 19:13:37.487052 waagent[1837]: 2025-06-20T19:13:37.486978Z INFO Daemon Downloaded certificate {'thumbprint': '0DBB80063B6230C429D1ED71BF7B720A3D39FAD5', 'hasPrivateKey': True}
Jun 20 19:13:37.489984 waagent[1837]: 2025-06-20T19:13:37.489945Z INFO Daemon Fetch goal state completed
Jun 20 19:13:37.499471 waagent[1837]: 2025-06-20T19:13:37.499410Z INFO Daemon Daemon Starting provisioning
Jun 20 19:13:37.500435 waagent[1837]: 2025-06-20T19:13:37.500406Z INFO Daemon Daemon Handle ovf-env.xml.
Jun 20 19:13:37.501459 waagent[1837]: 2025-06-20T19:13:37.501435Z INFO Daemon Daemon Set hostname [ci-4344.1.0-a-69d2cbc98d]
Jun 20 19:13:37.651468 waagent[1837]: 2025-06-20T19:13:37.651393Z INFO Daemon Daemon Publish hostname [ci-4344.1.0-a-69d2cbc98d]
Jun 20 19:13:37.653831 waagent[1837]: 2025-06-20T19:13:37.653777Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jun 20 19:13:37.655365 waagent[1837]: 2025-06-20T19:13:37.655335Z INFO Daemon Daemon Primary interface is [eth0]
Jun 20 19:13:37.662836 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:13:37.662846 systemd-networkd[1359]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:13:37.667380 waagent[1837]: 2025-06-20T19:13:37.664026Z INFO Daemon Daemon Create user account if not exists
Jun 20 19:13:37.667380 waagent[1837]: 2025-06-20T19:13:37.664319Z INFO Daemon Daemon User core already exists, skip useradd
Jun 20 19:13:37.667380 waagent[1837]: 2025-06-20T19:13:37.664550Z INFO Daemon Daemon Configure sudoer
Jun 20 19:13:37.662883 systemd-networkd[1359]: eth0: DHCP lease lost
Jun 20 19:13:37.669115 waagent[1837]: 2025-06-20T19:13:37.668858Z INFO Daemon Daemon Configure sshd
Jun 20 19:13:37.680073 waagent[1837]: 2025-06-20T19:13:37.680026Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jun 20 19:13:37.682963 waagent[1837]: 2025-06-20T19:13:37.682906Z INFO Daemon Daemon Deploy ssh public key.
Jun 20 19:13:37.696747 systemd-networkd[1359]: eth0: DHCPv4 address 10.200.4.4/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jun 20 19:13:38.781440 waagent[1837]: 2025-06-20T19:13:38.781362Z INFO Daemon Daemon Provisioning complete
Jun 20 19:13:38.794459 waagent[1837]: 2025-06-20T19:13:38.794415Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jun 20 19:13:38.796637 waagent[1837]: 2025-06-20T19:13:38.795162Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jun 20 19:13:38.796637 waagent[1837]: 2025-06-20T19:13:38.795531Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Jun 20 19:13:38.902128 waagent[1920]: 2025-06-20T19:13:38.902031Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Jun 20 19:13:38.902517 waagent[1920]: 2025-06-20T19:13:38.902167Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.0
Jun 20 19:13:38.902517 waagent[1920]: 2025-06-20T19:13:38.902208Z INFO ExtHandler ExtHandler Python: 3.11.12
Jun 20 19:13:38.902517 waagent[1920]: 2025-06-20T19:13:38.902246Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Jun 20 19:13:39.249541 waagent[1920]: 2025-06-20T19:13:39.249396Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Jun 20 19:13:39.249655 waagent[1920]: 2025-06-20T19:13:39.249633Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jun 20 19:13:39.249768 waagent[1920]: 2025-06-20T19:13:39.249691Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jun 20 19:13:39.258925 waagent[1920]: 2025-06-20T19:13:39.258855Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jun 20 19:13:39.266804 waagent[1920]: 2025-06-20T19:13:39.266769Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Jun 20 19:13:39.267193 waagent[1920]: 2025-06-20T19:13:39.267166Z INFO ExtHandler
Jun 20 19:13:39.267251 waagent[1920]: 2025-06-20T19:13:39.267222Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d58e1a3d-9d15-4c71-9310-11d1889a5199 eTag: 4034625542340805510 source: Fabric]
Jun 20 19:13:39.267439 waagent[1920]: 2025-06-20T19:13:39.267418Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jun 20 19:13:39.267819 waagent[1920]: 2025-06-20T19:13:39.267796Z INFO ExtHandler
Jun 20 19:13:39.267858 waagent[1920]: 2025-06-20T19:13:39.267840Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jun 20 19:13:39.276058 waagent[1920]: 2025-06-20T19:13:39.276030Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jun 20 19:13:39.726731 waagent[1920]: 2025-06-20T19:13:39.725049Z INFO ExtHandler Downloaded certificate {'thumbprint': '0DBB80063B6230C429D1ED71BF7B720A3D39FAD5', 'hasPrivateKey': True}
Jun 20 19:13:39.726731 waagent[1920]: 2025-06-20T19:13:39.725874Z INFO ExtHandler Fetch goal state completed
Jun 20 19:13:39.739300 waagent[1920]: 2025-06-20T19:13:39.739234Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025)
Jun 20 19:13:39.744133 waagent[1920]: 2025-06-20T19:13:39.744079Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1920
Jun 20 19:13:39.744258 waagent[1920]: 2025-06-20T19:13:39.744234Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jun 20 19:13:39.744520 waagent[1920]: 2025-06-20T19:13:39.744496Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Jun 20 19:13:39.745592 waagent[1920]: 2025-06-20T19:13:39.745560Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk']
Jun 20 19:13:39.745946 waagent[1920]: 2025-06-20T19:13:39.745915Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Jun 20 19:13:39.746068 waagent[1920]: 2025-06-20T19:13:39.746044Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Jun 20 19:13:39.746460 waagent[1920]: 2025-06-20T19:13:39.746436Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jun 20 19:13:40.306516 waagent[1920]: 2025-06-20T19:13:40.306471Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jun 20 19:13:40.306952 waagent[1920]: 2025-06-20T19:13:40.306732Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jun 20 19:13:40.312847 waagent[1920]: 2025-06-20T19:13:40.312782Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jun 20 19:13:40.318526 systemd[1]: Reload requested from client PID 1937 ('systemctl') (unit waagent.service)...
Jun 20 19:13:40.318540 systemd[1]: Reloading...
Jun 20 19:13:40.404759 zram_generator::config[1978]: No configuration found.
Jun 20 19:13:40.482370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:13:40.577685 systemd[1]: Reloading finished in 258 ms.
Jun 20 19:13:40.604715 waagent[1920]: 2025-06-20T19:13:40.603272Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jun 20 19:13:40.604715 waagent[1920]: 2025-06-20T19:13:40.603436Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jun 20 19:13:41.692501 waagent[1920]: 2025-06-20T19:13:41.692408Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jun 20 19:13:41.692922 waagent[1920]: 2025-06-20T19:13:41.692870Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Jun 20 19:13:41.693807 waagent[1920]: 2025-06-20T19:13:41.693729Z INFO ExtHandler ExtHandler Starting env monitor service.
Jun 20 19:13:41.693897 waagent[1920]: 2025-06-20T19:13:41.693812Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jun 20 19:13:41.693932 waagent[1920]: 2025-06-20T19:13:41.693899Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jun 20 19:13:41.694124 waagent[1920]: 2025-06-20T19:13:41.694096Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jun 20 19:13:41.694588 waagent[1920]: 2025-06-20T19:13:41.694560Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jun 20 19:13:41.694588 waagent[1920]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jun 20 19:13:41.694588 waagent[1920]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0
Jun 20 19:13:41.694588 waagent[1920]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jun 20 19:13:41.694588 waagent[1920]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jun 20 19:13:41.694588 waagent[1920]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jun 20 19:13:41.694588 waagent[1920]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jun 20 19:13:41.694787 waagent[1920]: 2025-06-20T19:13:41.694631Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jun 20 19:13:41.694787 waagent[1920]: 2025-06-20T19:13:41.694740Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jun 20 19:13:41.694858 waagent[1920]: 2025-06-20T19:13:41.694685Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jun 20 19:13:41.694925 waagent[1920]: 2025-06-20T19:13:41.694899Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jun 20 19:13:41.694973 waagent[1920]: 2025-06-20T19:13:41.694935Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jun 20 19:13:41.695543 waagent[1920]: 2025-06-20T19:13:41.695509Z INFO EnvHandler ExtHandler Configure routes
Jun 20 19:13:41.695640 waagent[1920]: 2025-06-20T19:13:41.695601Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jun 20 19:13:41.695742 waagent[1920]: 2025-06-20T19:13:41.695686Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jun 20 19:13:41.695800 waagent[1920]: 2025-06-20T19:13:41.695778Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jun 20 19:13:41.696095 waagent[1920]: 2025-06-20T19:13:41.696044Z INFO EnvHandler ExtHandler Gateway:None
Jun 20 19:13:41.696248 waagent[1920]: 2025-06-20T19:13:41.696230Z INFO EnvHandler ExtHandler Routes:None
Jun 20 19:13:41.708130 waagent[1920]: 2025-06-20T19:13:41.708092Z INFO ExtHandler ExtHandler
Jun 20 19:13:41.708211 waagent[1920]: 2025-06-20T19:13:41.708155Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e487bde1-ad01-4843-8af0-ffbee5749be5 correlation 9c081494-47a1-445a-a1f0-19182b43a2fa created: 2025-06-20T19:12:30.839287Z]
Jun 20 19:13:41.708457 waagent[1920]: 2025-06-20T19:13:41.708435Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jun 20 19:13:41.708950 waagent[1920]: 2025-06-20T19:13:41.708920Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Jun 20 19:13:41.746469 waagent[1920]: 2025-06-20T19:13:41.745970Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Jun 20 19:13:41.746469 waagent[1920]: Try `iptables -h' or 'iptables --help' for more information.)
Jun 20 19:13:41.746469 waagent[1920]: 2025-06-20T19:13:41.746391Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 86F9DED1-AD5A-447A-8568-8B717445E929;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jun 20 19:13:41.809592 waagent[1920]: 2025-06-20T19:13:41.809516Z INFO MonitorHandler ExtHandler Network interfaces: Jun 20 19:13:41.809592 waagent[1920]: Executing ['ip', '-a', '-o', 'link']: Jun 20 19:13:41.809592 waagent[1920]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 20 19:13:41.809592 waagent[1920]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:c1:32:07 brd ff:ff:ff:ff:ff:ff\ alias Network Device Jun 20 19:13:41.809592 waagent[1920]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:c1:32:07 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jun 20 19:13:41.809592 waagent[1920]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 20 19:13:41.809592 waagent[1920]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 20 19:13:41.809592 waagent[1920]: 2: eth0 inet 10.200.4.4/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 20 19:13:41.809592 waagent[1920]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 20 19:13:41.809592 waagent[1920]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 20 19:13:41.809592 waagent[1920]: 2: eth0 inet6 fe80::6245:bdff:fec1:3207/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 19:13:41.809592 waagent[1920]: 3: enP30832s1 inet6 fe80::6245:bdff:fec1:3207/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 19:13:41.915362 waagent[1920]: 
2025-06-20T19:13:41.915306Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jun 20 19:13:41.915362 waagent[1920]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 19:13:41.915362 waagent[1920]: pkts bytes target prot opt in out source destination Jun 20 19:13:41.915362 waagent[1920]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 19:13:41.915362 waagent[1920]: pkts bytes target prot opt in out source destination Jun 20 19:13:41.915362 waagent[1920]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 19:13:41.915362 waagent[1920]: pkts bytes target prot opt in out source destination Jun 20 19:13:41.915362 waagent[1920]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 19:13:41.915362 waagent[1920]: 6 520 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 19:13:41.915362 waagent[1920]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 19:13:41.918210 waagent[1920]: 2025-06-20T19:13:41.918154Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 20 19:13:41.918210 waagent[1920]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 19:13:41.918210 waagent[1920]: pkts bytes target prot opt in out source destination Jun 20 19:13:41.918210 waagent[1920]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 19:13:41.918210 waagent[1920]: pkts bytes target prot opt in out source destination Jun 20 19:13:41.918210 waagent[1920]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 19:13:41.918210 waagent[1920]: pkts bytes target prot opt in out source destination Jun 20 19:13:41.918210 waagent[1920]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 19:13:41.918210 waagent[1920]: 7 580 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 19:13:41.918210 waagent[1920]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 19:13:45.031508 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jun 20 19:13:45.033346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:13:47.815528 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 19:13:47.816900 systemd[1]: Started sshd@0-10.200.4.4:22-10.200.16.10:38566.service - OpenSSH per-connection server daemon (10.200.16.10:38566). Jun 20 19:13:49.084163 sshd[2069]: Accepted publickey for core from 10.200.16.10 port 38566 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:13:49.085294 sshd-session[2069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:49.089826 systemd-logind[1709]: New session 3 of user core. Jun 20 19:13:49.094836 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 19:13:49.607476 systemd[1]: Started sshd@1-10.200.4.4:22-10.200.16.10:54712.service - OpenSSH per-connection server daemon (10.200.16.10:54712). Jun 20 19:13:50.206262 sshd[2074]: Accepted publickey for core from 10.200.16.10 port 54712 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:13:50.207627 sshd-session[2074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:50.212436 systemd-logind[1709]: New session 4 of user core. Jun 20 19:13:50.221843 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 19:13:50.633049 sshd[2076]: Connection closed by 10.200.16.10 port 54712 Jun 20 19:13:50.633781 sshd-session[2074]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:50.636743 systemd[1]: sshd@1-10.200.4.4:22-10.200.16.10:54712.service: Deactivated successfully. Jun 20 19:13:50.638486 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 19:13:50.639883 systemd-logind[1709]: Session 4 logged out. Waiting for processes to exit. Jun 20 19:13:50.641257 systemd-logind[1709]: Removed session 4. 
Jun 20 19:13:50.755778 systemd[1]: Started sshd@2-10.200.4.4:22-10.200.16.10:54714.service - OpenSSH per-connection server daemon (10.200.16.10:54714). Jun 20 19:13:50.976207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:13:50.988981 (kubelet)[2089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:13:51.022300 kubelet[2089]: E0620 19:13:51.022227 2089 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:13:51.025615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:13:51.025781 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:13:51.026132 systemd[1]: kubelet.service: Consumed 143ms CPU time, 108.5M memory peak. Jun 20 19:13:51.354475 sshd[2082]: Accepted publickey for core from 10.200.16.10 port 54714 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:13:51.355905 sshd-session[2082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:51.360628 systemd-logind[1709]: New session 5 of user core. Jun 20 19:13:51.370863 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 19:13:51.769916 sshd[2096]: Connection closed by 10.200.16.10 port 54714 Jun 20 19:13:51.770804 sshd-session[2082]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:51.774503 systemd[1]: sshd@2-10.200.4.4:22-10.200.16.10:54714.service: Deactivated successfully. Jun 20 19:13:51.776133 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:13:51.776794 systemd-logind[1709]: Session 5 logged out. Waiting for processes to exit. 
Jun 20 19:13:51.778039 systemd-logind[1709]: Removed session 5. Jun 20 19:13:51.887013 systemd[1]: Started sshd@3-10.200.4.4:22-10.200.16.10:54720.service - OpenSSH per-connection server daemon (10.200.16.10:54720). Jun 20 19:13:52.486499 sshd[2102]: Accepted publickey for core from 10.200.16.10 port 54720 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:13:52.487908 sshd-session[2102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:52.492684 systemd-logind[1709]: New session 6 of user core. Jun 20 19:13:52.498863 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 19:13:52.909796 sshd[2104]: Connection closed by 10.200.16.10 port 54720 Jun 20 19:13:52.910578 sshd-session[2102]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:52.914362 systemd[1]: sshd@3-10.200.4.4:22-10.200.16.10:54720.service: Deactivated successfully. Jun 20 19:13:52.915992 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 19:13:52.916747 systemd-logind[1709]: Session 6 logged out. Waiting for processes to exit. Jun 20 19:13:52.918112 systemd-logind[1709]: Removed session 6. Jun 20 19:13:53.218479 systemd[1]: Started sshd@4-10.200.4.4:22-10.200.16.10:54724.service - OpenSSH per-connection server daemon (10.200.16.10:54724). Jun 20 19:13:53.813052 sshd[2110]: Accepted publickey for core from 10.200.16.10 port 54724 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:13:53.814439 sshd-session[2110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:53.819280 systemd-logind[1709]: New session 7 of user core. Jun 20 19:13:53.823867 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jun 20 19:13:54.250709 sudo[2113]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 19:13:54.250961 sudo[2113]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:13:54.262830 sudo[2113]: pam_unix(sudo:session): session closed for user root Jun 20 19:13:54.364360 sshd[2112]: Connection closed by 10.200.16.10 port 54724 Jun 20 19:13:54.365330 sshd-session[2110]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:54.369470 systemd[1]: sshd@4-10.200.4.4:22-10.200.16.10:54724.service: Deactivated successfully. Jun 20 19:13:54.371177 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 19:13:54.371847 systemd-logind[1709]: Session 7 logged out. Waiting for processes to exit. Jun 20 19:13:54.373104 systemd-logind[1709]: Removed session 7. Jun 20 19:13:54.469896 systemd[1]: Started sshd@5-10.200.4.4:22-10.200.16.10:54738.service - OpenSSH per-connection server daemon (10.200.16.10:54738). Jun 20 19:13:55.069634 sshd[2119]: Accepted publickey for core from 10.200.16.10 port 54738 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:13:55.071058 sshd-session[2119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:55.075885 systemd-logind[1709]: New session 8 of user core. Jun 20 19:13:55.080890 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 20 19:13:55.395869 sudo[2123]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 19:13:55.396289 sudo[2123]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:13:55.429368 sudo[2123]: pam_unix(sudo:session): session closed for user root Jun 20 19:13:55.433987 sudo[2122]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 19:13:55.434213 sudo[2122]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:13:55.442427 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:13:55.475425 augenrules[2145]: No rules Jun 20 19:13:55.476476 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:13:55.476688 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:13:55.477761 sudo[2122]: pam_unix(sudo:session): session closed for user root Jun 20 19:13:55.576096 sshd[2121]: Connection closed by 10.200.16.10 port 54738 Jun 20 19:13:55.576783 sshd-session[2119]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:55.579848 systemd[1]: sshd@5-10.200.4.4:22-10.200.16.10:54738.service: Deactivated successfully. Jun 20 19:13:55.581574 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:13:55.583564 systemd-logind[1709]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:13:55.584469 systemd-logind[1709]: Removed session 8. Jun 20 19:13:55.686047 systemd[1]: Started sshd@6-10.200.4.4:22-10.200.16.10:54748.service - OpenSSH per-connection server daemon (10.200.16.10:54748). 
Jun 20 19:13:56.289268 sshd[2154]: Accepted publickey for core from 10.200.16.10 port 54748 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:13:56.290623 sshd-session[2154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:56.295333 systemd-logind[1709]: New session 9 of user core. Jun 20 19:13:56.297852 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 19:13:56.616038 sudo[2157]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:13:56.616271 sudo[2157]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:13:56.640041 chronyd[1729]: Selected source PHC0 Jun 20 19:14:00.676712 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:14:00.686109 (dockerd)[2175]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:14:01.031294 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 19:14:01.033049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:14:02.349941 dockerd[2175]: time="2025-06-20T19:14:02.349880948Z" level=info msg="Starting up" Jun 20 19:14:02.350748 dockerd[2175]: time="2025-06-20T19:14:02.350720818Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 20 19:14:07.224132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:14:07.236959 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:14:07.274302 kubelet[2202]: E0620 19:14:07.274222 2202 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:14:07.276206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:14:07.276346 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:14:07.276724 systemd[1]: kubelet.service: Consumed 140ms CPU time, 108.8M memory peak. Jun 20 19:14:10.102513 dockerd[2175]: time="2025-06-20T19:14:10.102444240Z" level=info msg="Loading containers: start." Jun 20 19:14:10.173719 kernel: Initializing XFRM netlink socket Jun 20 19:14:10.527114 systemd-networkd[1359]: docker0: Link UP Jun 20 19:14:10.580108 dockerd[2175]: time="2025-06-20T19:14:10.580048523Z" level=info msg="Loading containers: done." 
Jun 20 19:14:10.792979 dockerd[2175]: time="2025-06-20T19:14:10.792894250Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:14:10.793172 dockerd[2175]: time="2025-06-20T19:14:10.793065700Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 20 19:14:10.793231 dockerd[2175]: time="2025-06-20T19:14:10.793215050Z" level=info msg="Initializing buildkit" Jun 20 19:14:10.843896 dockerd[2175]: time="2025-06-20T19:14:10.843841767Z" level=info msg="Completed buildkit initialization" Jun 20 19:14:10.850342 dockerd[2175]: time="2025-06-20T19:14:10.850295785Z" level=info msg="Daemon has completed initialization" Jun 20 19:14:10.850568 dockerd[2175]: time="2025-06-20T19:14:10.850359426Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:14:10.850645 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:14:12.329635 containerd[1732]: time="2025-06-20T19:14:12.329578674Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 20 19:14:13.228517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2284545733.mount: Deactivated successfully. Jun 20 19:14:13.393654 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Jun 20 19:14:14.357030 containerd[1732]: time="2025-06-20T19:14:14.356972966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:14.359207 containerd[1732]: time="2025-06-20T19:14:14.359167339Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053" Jun 20 19:14:14.362111 containerd[1732]: time="2025-06-20T19:14:14.362067702Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:14.365668 containerd[1732]: time="2025-06-20T19:14:14.365623217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:14.366381 containerd[1732]: time="2025-06-20T19:14:14.366281756Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.036650815s" Jun 20 19:14:14.366381 containerd[1732]: time="2025-06-20T19:14:14.366314416Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jun 20 19:14:14.369901 containerd[1732]: time="2025-06-20T19:14:14.369872973Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 20 19:14:15.679262 containerd[1732]: time="2025-06-20T19:14:15.679211691Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:15.681966 containerd[1732]: time="2025-06-20T19:14:15.681932912Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920" Jun 20 19:14:15.684673 containerd[1732]: time="2025-06-20T19:14:15.684633596Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:15.688692 containerd[1732]: time="2025-06-20T19:14:15.688646810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:15.689470 containerd[1732]: time="2025-06-20T19:14:15.689327120Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.319418649s" Jun 20 19:14:15.689470 containerd[1732]: time="2025-06-20T19:14:15.689362143Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jun 20 19:14:15.690043 containerd[1732]: time="2025-06-20T19:14:15.689993278Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 20 19:14:16.877514 containerd[1732]: time="2025-06-20T19:14:16.877464600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:16.880543 
containerd[1732]: time="2025-06-20T19:14:16.880508453Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924" Jun 20 19:14:16.883348 containerd[1732]: time="2025-06-20T19:14:16.883302008Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:16.887476 containerd[1732]: time="2025-06-20T19:14:16.887427915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:16.888215 containerd[1732]: time="2025-06-20T19:14:16.888090779Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.198065034s" Jun 20 19:14:16.888215 containerd[1732]: time="2025-06-20T19:14:16.888121815Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jun 20 19:14:16.888768 containerd[1732]: time="2025-06-20T19:14:16.888742273Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 20 19:14:17.281528 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 20 19:14:17.283512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:14:17.822987 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:14:17.828930 (kubelet)[2459]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:14:17.869967 kubelet[2459]: E0620 19:14:17.869814 2459 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:14:17.871749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:14:17.871881 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:14:17.872232 systemd[1]: kubelet.service: Consumed 137ms CPU time, 109.6M memory peak. Jun 20 19:14:18.448830 update_engine[1711]: I20250620 19:14:18.448740 1711 update_attempter.cc:509] Updating boot flags... Jun 20 19:14:18.743112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount933160692.mount: Deactivated successfully. 
Jun 20 19:14:19.112036 containerd[1732]: time="2025-06-20T19:14:19.111982022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:19.116843 containerd[1732]: time="2025-06-20T19:14:19.116812818Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371" Jun 20 19:14:19.120401 containerd[1732]: time="2025-06-20T19:14:19.120356922Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:19.124065 containerd[1732]: time="2025-06-20T19:14:19.124021610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:19.124768 containerd[1732]: time="2025-06-20T19:14:19.124373671Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.23559139s" Jun 20 19:14:19.124768 containerd[1732]: time="2025-06-20T19:14:19.124403140Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jun 20 19:14:19.125002 containerd[1732]: time="2025-06-20T19:14:19.124953859Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 19:14:19.806834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3475434052.mount: Deactivated successfully. 
Jun 20 19:14:20.745922 containerd[1732]: time="2025-06-20T19:14:20.745870349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:20.748519 containerd[1732]: time="2025-06-20T19:14:20.748478813Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jun 20 19:14:20.753117 containerd[1732]: time="2025-06-20T19:14:20.753061454Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:20.757707 containerd[1732]: time="2025-06-20T19:14:20.757626641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:20.758411 containerd[1732]: time="2025-06-20T19:14:20.758383919Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.63339887s" Jun 20 19:14:20.758469 containerd[1732]: time="2025-06-20T19:14:20.758421144Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 20 19:14:20.759276 containerd[1732]: time="2025-06-20T19:14:20.759254290Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:14:21.395572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705011048.mount: Deactivated successfully. 
Jun 20 19:14:21.418519 containerd[1732]: time="2025-06-20T19:14:21.418470752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:14:21.422087 containerd[1732]: time="2025-06-20T19:14:21.422051252Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jun 20 19:14:21.424885 containerd[1732]: time="2025-06-20T19:14:21.424843107Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:14:21.428233 containerd[1732]: time="2025-06-20T19:14:21.428191144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:14:21.428769 containerd[1732]: time="2025-06-20T19:14:21.428623488Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 669.342457ms" Jun 20 19:14:21.428769 containerd[1732]: time="2025-06-20T19:14:21.428653605Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:14:21.429263 containerd[1732]: time="2025-06-20T19:14:21.429173965Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 20 19:14:22.122875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234489637.mount: 
Deactivated successfully. Jun 20 19:14:23.837736 containerd[1732]: time="2025-06-20T19:14:23.837665786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:23.840871 containerd[1732]: time="2025-06-20T19:14:23.840831451Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Jun 20 19:14:23.843666 containerd[1732]: time="2025-06-20T19:14:23.843612467Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:23.847367 containerd[1732]: time="2025-06-20T19:14:23.847308678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:23.848250 containerd[1732]: time="2025-06-20T19:14:23.848055497Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.418852988s" Jun 20 19:14:23.848250 containerd[1732]: time="2025-06-20T19:14:23.848087766Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jun 20 19:14:26.113765 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:14:26.113985 systemd[1]: kubelet.service: Consumed 137ms CPU time, 109.6M memory peak. Jun 20 19:14:26.116177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 20 19:14:26.138284 systemd[1]: Reload requested from client PID 2635 ('systemctl') (unit session-9.scope)... Jun 20 19:14:26.138301 systemd[1]: Reloading... Jun 20 19:14:26.234730 zram_generator::config[2680]: No configuration found. Jun 20 19:14:26.333976 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:14:26.441837 systemd[1]: Reloading finished in 303 ms. Jun 20 19:14:26.551247 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:14:26.551348 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:14:26.551684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:14:26.551768 systemd[1]: kubelet.service: Consumed 78ms CPU time, 78M memory peak. Jun 20 19:14:26.553841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:14:27.072155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:14:27.082001 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:14:27.118487 kubelet[2747]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:14:27.118487 kubelet[2747]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:14:27.118487 kubelet[2747]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 19:14:27.118903 kubelet[2747]: I0620 19:14:27.118591 2747 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:14:27.298262 kubelet[2747]: I0620 19:14:27.298223 2747 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:14:27.298262 kubelet[2747]: I0620 19:14:27.298250 2747 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:14:27.298514 kubelet[2747]: I0620 19:14:27.298502 2747 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:14:27.326050 kubelet[2747]: E0620 19:14:27.325915 2747 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.4:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:14:27.327565 kubelet[2747]: I0620 19:14:27.327416 2747 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:14:27.335391 kubelet[2747]: I0620 19:14:27.335374 2747 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:14:27.339871 kubelet[2747]: I0620 19:14:27.339754 2747 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:14:27.341576 kubelet[2747]: I0620 19:14:27.341394 2747 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:14:27.341868 kubelet[2747]: I0620 19:14:27.341669 2747 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-69d2cbc98d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:14:27.342001 kubelet[2747]: I0620 19:14:27.341872 2747 topology_manager.go:138] "Creating topology manager 
with none policy" Jun 20 19:14:27.342001 kubelet[2747]: I0620 19:14:27.341883 2747 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:14:27.342052 kubelet[2747]: I0620 19:14:27.342021 2747 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:14:27.345420 kubelet[2747]: I0620 19:14:27.345340 2747 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:14:27.345420 kubelet[2747]: I0620 19:14:27.345367 2747 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:14:27.345420 kubelet[2747]: I0620 19:14:27.345392 2747 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:14:27.345537 kubelet[2747]: I0620 19:14:27.345432 2747 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:14:27.351657 kubelet[2747]: W0620 19:14:27.351099 2747 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-69d2cbc98d&limit=500&resourceVersion=0": dial tcp 10.200.4.4:6443: connect: connection refused Jun 20 19:14:27.351657 kubelet[2747]: E0620 19:14:27.351158 2747 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-69d2cbc98d&limit=500&resourceVersion=0\": dial tcp 10.200.4.4:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:14:27.351657 kubelet[2747]: W0620 19:14:27.351485 2747 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.4:6443: connect: connection refused Jun 20 19:14:27.351657 kubelet[2747]: E0620 19:14:27.351516 2747 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.4:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:14:27.351942 kubelet[2747]: I0620 19:14:27.351916 2747 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:14:27.352455 kubelet[2747]: I0620 19:14:27.352351 2747 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:14:27.353486 kubelet[2747]: W0620 19:14:27.353039 2747 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 19:14:27.356482 kubelet[2747]: I0620 19:14:27.356462 2747 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:14:27.356561 kubelet[2747]: I0620 19:14:27.356507 2747 server.go:1287] "Started kubelet" Jun 20 19:14:27.358072 kubelet[2747]: I0620 19:14:27.357892 2747 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:14:27.362074 kubelet[2747]: E0620 19:14:27.360392 2747 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.0-a-69d2cbc98d.184ad626cefa5fb8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.0-a-69d2cbc98d,UID:ci-4344.1.0-a-69d2cbc98d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.0-a-69d2cbc98d,},FirstTimestamp:2025-06-20 19:14:27.35647532 +0000 UTC m=+0.270718962,LastTimestamp:2025-06-20 19:14:27.35647532 +0000 UTC m=+0.270718962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.0-a-69d2cbc98d,}" Jun 20 19:14:27.362977 kubelet[2747]: I0620 19:14:27.362932 2747 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:14:27.364294 kubelet[2747]: I0620 19:14:27.364192 2747 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:14:27.366328 kubelet[2747]: I0620 19:14:27.364775 2747 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:14:27.366328 kubelet[2747]: I0620 19:14:27.365026 2747 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:14:27.366328 kubelet[2747]: I0620 19:14:27.365180 2747 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:14:27.366328 kubelet[2747]: I0620 19:14:27.365228 2747 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:14:27.366328 kubelet[2747]: E0620 19:14:27.365359 2747 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-69d2cbc98d\" not found" Jun 20 19:14:27.367787 kubelet[2747]: E0620 19:14:27.367765 2747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-69d2cbc98d?timeout=10s\": dial tcp 10.200.4.4:6443: connect: connection refused" interval="200ms" Jun 20 19:14:27.367868 kubelet[2747]: I0620 19:14:27.367802 2747 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:14:27.367915 kubelet[2747]: I0620 19:14:27.367842 2747 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:14:27.368304 kubelet[2747]: I0620 19:14:27.368292 2747 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:14:27.368444 
kubelet[2747]: I0620 19:14:27.368432 2747 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:14:27.369567 kubelet[2747]: W0620 19:14:27.369530 2747 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.4:6443: connect: connection refused Jun 20 19:14:27.369782 kubelet[2747]: E0620 19:14:27.369761 2747 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.4:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:14:27.371659 kubelet[2747]: E0620 19:14:27.370926 2747 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:14:27.371659 kubelet[2747]: I0620 19:14:27.371300 2747 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:14:27.398035 kubelet[2747]: I0620 19:14:27.398011 2747 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:14:27.398035 kubelet[2747]: I0620 19:14:27.398031 2747 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:14:27.398158 kubelet[2747]: I0620 19:14:27.398045 2747 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:14:27.399451 kubelet[2747]: I0620 19:14:27.399425 2747 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:14:27.400786 kubelet[2747]: I0620 19:14:27.400764 2747 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 20 19:14:27.400850 kubelet[2747]: I0620 19:14:27.400791 2747 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:14:27.400850 kubelet[2747]: I0620 19:14:27.400809 2747 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:14:27.400850 kubelet[2747]: I0620 19:14:27.400816 2747 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:14:27.402902 kubelet[2747]: E0620 19:14:27.400858 2747 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:14:27.402902 kubelet[2747]: W0620 19:14:27.402078 2747 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.4:6443: connect: connection refused Jun 20 19:14:27.402902 kubelet[2747]: E0620 19:14:27.402214 2747 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.4:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:14:27.407235 kubelet[2747]: I0620 19:14:27.407219 2747 policy_none.go:49] "None policy: Start" Jun 20 19:14:27.407299 kubelet[2747]: I0620 19:14:27.407240 2747 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:14:27.407299 kubelet[2747]: I0620 19:14:27.407252 2747 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:14:27.414510 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:14:27.424088 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jun 20 19:14:27.427150 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:14:27.443358 kubelet[2747]: I0620 19:14:27.443275 2747 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:14:27.443474 kubelet[2747]: I0620 19:14:27.443463 2747 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:14:27.443518 kubelet[2747]: I0620 19:14:27.443479 2747 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:14:27.444255 kubelet[2747]: I0620 19:14:27.444228 2747 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:14:27.445964 kubelet[2747]: E0620 19:14:27.445945 2747 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:14:27.446107 kubelet[2747]: E0620 19:14:27.446086 2747 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.0-a-69d2cbc98d\" not found" Jun 20 19:14:27.509933 systemd[1]: Created slice kubepods-burstable-podf891574fe4235666ea29b7abd16072f0.slice - libcontainer container kubepods-burstable-podf891574fe4235666ea29b7abd16072f0.slice. Jun 20 19:14:27.518957 kubelet[2747]: E0620 19:14:27.518374 2747 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-69d2cbc98d\" not found" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.521014 systemd[1]: Created slice kubepods-burstable-pod5e49d406eccac60f94dcdaced78d29f0.slice - libcontainer container kubepods-burstable-pod5e49d406eccac60f94dcdaced78d29f0.slice. 
Jun 20 19:14:27.522912 kubelet[2747]: E0620 19:14:27.522890 2747 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-69d2cbc98d\" not found" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.524573 systemd[1]: Created slice kubepods-burstable-podb1bb35ef9fdc89f5ec34bc2b6319b320.slice - libcontainer container kubepods-burstable-podb1bb35ef9fdc89f5ec34bc2b6319b320.slice. Jun 20 19:14:27.526303 kubelet[2747]: E0620 19:14:27.526266 2747 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-69d2cbc98d\" not found" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.546060 kubelet[2747]: I0620 19:14:27.546046 2747 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.546447 kubelet[2747]: E0620 19:14:27.546428 2747 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.4:6443/api/v1/nodes\": dial tcp 10.200.4.4:6443: connect: connection refused" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.568972 kubelet[2747]: E0620 19:14:27.568929 2747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-69d2cbc98d?timeout=10s\": dial tcp 10.200.4.4:6443: connect: connection refused" interval="400ms" Jun 20 19:14:27.568972 kubelet[2747]: I0620 19:14:27.568951 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1bb35ef9fdc89f5ec34bc2b6319b320-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-69d2cbc98d\" (UID: \"b1bb35ef9fdc89f5ec34bc2b6319b320\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.569147 kubelet[2747]: I0620 19:14:27.568984 2747 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e49d406eccac60f94dcdaced78d29f0-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-69d2cbc98d\" (UID: \"5e49d406eccac60f94dcdaced78d29f0\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.569147 kubelet[2747]: I0620 19:14:27.569004 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e49d406eccac60f94dcdaced78d29f0-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-69d2cbc98d\" (UID: \"5e49d406eccac60f94dcdaced78d29f0\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.569147 kubelet[2747]: I0620 19:14:27.569022 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e49d406eccac60f94dcdaced78d29f0-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-69d2cbc98d\" (UID: \"5e49d406eccac60f94dcdaced78d29f0\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.569147 kubelet[2747]: I0620 19:14:27.569039 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1bb35ef9fdc89f5ec34bc2b6319b320-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-69d2cbc98d\" (UID: \"b1bb35ef9fdc89f5ec34bc2b6319b320\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.569147 kubelet[2747]: I0620 19:14:27.569058 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1bb35ef9fdc89f5ec34bc2b6319b320-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-69d2cbc98d\" (UID: \"b1bb35ef9fdc89f5ec34bc2b6319b320\") " 
pod="kube-system/kube-apiserver-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.569288 kubelet[2747]: I0620 19:14:27.569089 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5e49d406eccac60f94dcdaced78d29f0-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-69d2cbc98d\" (UID: \"5e49d406eccac60f94dcdaced78d29f0\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.569288 kubelet[2747]: I0620 19:14:27.569110 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e49d406eccac60f94dcdaced78d29f0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-69d2cbc98d\" (UID: \"5e49d406eccac60f94dcdaced78d29f0\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.569288 kubelet[2747]: I0620 19:14:27.569129 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f891574fe4235666ea29b7abd16072f0-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-69d2cbc98d\" (UID: \"f891574fe4235666ea29b7abd16072f0\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.749362 kubelet[2747]: I0620 19:14:27.749228 2747 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.749734 kubelet[2747]: E0620 19:14:27.749689 2747 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.4:6443/api/v1/nodes\": dial tcp 10.200.4.4:6443: connect: connection refused" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:27.820232 containerd[1732]: time="2025-06-20T19:14:27.820176326Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-69d2cbc98d,Uid:f891574fe4235666ea29b7abd16072f0,Namespace:kube-system,Attempt:0,}" Jun 20 19:14:27.823649 containerd[1732]: time="2025-06-20T19:14:27.823611092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-69d2cbc98d,Uid:5e49d406eccac60f94dcdaced78d29f0,Namespace:kube-system,Attempt:0,}" Jun 20 19:14:27.827580 containerd[1732]: time="2025-06-20T19:14:27.827547297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-69d2cbc98d,Uid:b1bb35ef9fdc89f5ec34bc2b6319b320,Namespace:kube-system,Attempt:0,}" Jun 20 19:14:27.908280 containerd[1732]: time="2025-06-20T19:14:27.908225419Z" level=info msg="connecting to shim 16dcd6d0d5e9fc028f6f39d2e93b177415f8367792d7efa22bcc37a07b3adbe5" address="unix:///run/containerd/s/352a7d996d136bd67bed0e339a6b2c2d0d4359f1f0989b8b702a858bc1cc2598" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:27.912813 containerd[1732]: time="2025-06-20T19:14:27.912060909Z" level=info msg="connecting to shim f6df23a01cd62e62d22ac8087c3bd2dc5d7c20e2f940cda8be7ba57e817909c8" address="unix:///run/containerd/s/43daa5f63d276507d8b82b9b7b69e930be8b9424d065fd10f752d57f6cdc6770" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:27.922262 containerd[1732]: time="2025-06-20T19:14:27.921812058Z" level=info msg="connecting to shim 42c79ea31bcb80af676b7c1ef55f48583c636a0887d3728dbaca1490a7ce699b" address="unix:///run/containerd/s/236fb703244d3022520c40f11db6b88b2863c4ea1359501ef7151fe38ab9bd08" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:27.952281 kubelet[2747]: E0620 19:14:27.951768 2747 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.0-a-69d2cbc98d.184ad626cefa5fb8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.0-a-69d2cbc98d,UID:ci-4344.1.0-a-69d2cbc98d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.0-a-69d2cbc98d,},FirstTimestamp:2025-06-20 19:14:27.35647532 +0000 UTC m=+0.270718962,LastTimestamp:2025-06-20 19:14:27.35647532 +0000 UTC m=+0.270718962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.0-a-69d2cbc98d,}" Jun 20 19:14:27.952931 systemd[1]: Started cri-containerd-16dcd6d0d5e9fc028f6f39d2e93b177415f8367792d7efa22bcc37a07b3adbe5.scope - libcontainer container 16dcd6d0d5e9fc028f6f39d2e93b177415f8367792d7efa22bcc37a07b3adbe5. Jun 20 19:14:27.964860 systemd[1]: Started cri-containerd-f6df23a01cd62e62d22ac8087c3bd2dc5d7c20e2f940cda8be7ba57e817909c8.scope - libcontainer container f6df23a01cd62e62d22ac8087c3bd2dc5d7c20e2f940cda8be7ba57e817909c8. Jun 20 19:14:27.968744 systemd[1]: Started cri-containerd-42c79ea31bcb80af676b7c1ef55f48583c636a0887d3728dbaca1490a7ce699b.scope - libcontainer container 42c79ea31bcb80af676b7c1ef55f48583c636a0887d3728dbaca1490a7ce699b. 
Jun 20 19:14:27.969968 kubelet[2747]: E0620 19:14:27.969916 2747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-69d2cbc98d?timeout=10s\": dial tcp 10.200.4.4:6443: connect: connection refused" interval="800ms" Jun 20 19:14:28.036533 containerd[1732]: time="2025-06-20T19:14:28.036386830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-69d2cbc98d,Uid:5e49d406eccac60f94dcdaced78d29f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6df23a01cd62e62d22ac8087c3bd2dc5d7c20e2f940cda8be7ba57e817909c8\"" Jun 20 19:14:28.041464 containerd[1732]: time="2025-06-20T19:14:28.041284761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-69d2cbc98d,Uid:b1bb35ef9fdc89f5ec34bc2b6319b320,Namespace:kube-system,Attempt:0,} returns sandbox id \"42c79ea31bcb80af676b7c1ef55f48583c636a0887d3728dbaca1490a7ce699b\"" Jun 20 19:14:28.044566 containerd[1732]: time="2025-06-20T19:14:28.043782000Z" level=info msg="CreateContainer within sandbox \"f6df23a01cd62e62d22ac8087c3bd2dc5d7c20e2f940cda8be7ba57e817909c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:14:28.044723 containerd[1732]: time="2025-06-20T19:14:28.044678984Z" level=info msg="CreateContainer within sandbox \"42c79ea31bcb80af676b7c1ef55f48583c636a0887d3728dbaca1490a7ce699b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:14:28.053292 containerd[1732]: time="2025-06-20T19:14:28.053261618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-69d2cbc98d,Uid:f891574fe4235666ea29b7abd16072f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"16dcd6d0d5e9fc028f6f39d2e93b177415f8367792d7efa22bcc37a07b3adbe5\"" Jun 20 19:14:28.054961 containerd[1732]: time="2025-06-20T19:14:28.054926995Z" level=info 
msg="CreateContainer within sandbox \"16dcd6d0d5e9fc028f6f39d2e93b177415f8367792d7efa22bcc37a07b3adbe5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:14:28.098835 containerd[1732]: time="2025-06-20T19:14:28.098776227Z" level=info msg="Container 73bdcf165f1f0b6e4c18e3a3e0e08db1f244054b258a4abe37b653bbeee1f299: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:28.112232 containerd[1732]: time="2025-06-20T19:14:28.112183574Z" level=info msg="Container c46bed91e671253047ebe38b435913c585a0ddc4ae62eefdc7931d9f065dba07: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:28.136268 containerd[1732]: time="2025-06-20T19:14:28.136199025Z" level=info msg="CreateContainer within sandbox \"f6df23a01cd62e62d22ac8087c3bd2dc5d7c20e2f940cda8be7ba57e817909c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"73bdcf165f1f0b6e4c18e3a3e0e08db1f244054b258a4abe37b653bbeee1f299\"" Jun 20 19:14:28.136920 containerd[1732]: time="2025-06-20T19:14:28.136895419Z" level=info msg="StartContainer for \"73bdcf165f1f0b6e4c18e3a3e0e08db1f244054b258a4abe37b653bbeee1f299\"" Jun 20 19:14:28.137816 containerd[1732]: time="2025-06-20T19:14:28.137785975Z" level=info msg="connecting to shim 73bdcf165f1f0b6e4c18e3a3e0e08db1f244054b258a4abe37b653bbeee1f299" address="unix:///run/containerd/s/43daa5f63d276507d8b82b9b7b69e930be8b9424d065fd10f752d57f6cdc6770" protocol=ttrpc version=3 Jun 20 19:14:28.139725 containerd[1732]: time="2025-06-20T19:14:28.139470129Z" level=info msg="Container 8b98001f0ace1058e15db4f540b33c49d71c96309a8f08c55a93deb1f52bca39: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:28.154137 containerd[1732]: time="2025-06-20T19:14:28.154108596Z" level=info msg="CreateContainer within sandbox \"42c79ea31bcb80af676b7c1ef55f48583c636a0887d3728dbaca1490a7ce699b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c46bed91e671253047ebe38b435913c585a0ddc4ae62eefdc7931d9f065dba07\"" Jun 
20 19:14:28.154553 kubelet[2747]: I0620 19:14:28.154529 2747 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:28.155148 kubelet[2747]: E0620 19:14:28.155121 2747 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.4:6443/api/v1/nodes\": dial tcp 10.200.4.4:6443: connect: connection refused" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:28.155253 systemd[1]: Started cri-containerd-73bdcf165f1f0b6e4c18e3a3e0e08db1f244054b258a4abe37b653bbeee1f299.scope - libcontainer container 73bdcf165f1f0b6e4c18e3a3e0e08db1f244054b258a4abe37b653bbeee1f299. Jun 20 19:14:28.155867 containerd[1732]: time="2025-06-20T19:14:28.155321241Z" level=info msg="StartContainer for \"c46bed91e671253047ebe38b435913c585a0ddc4ae62eefdc7931d9f065dba07\"" Jun 20 19:14:28.157547 containerd[1732]: time="2025-06-20T19:14:28.157258794Z" level=info msg="connecting to shim c46bed91e671253047ebe38b435913c585a0ddc4ae62eefdc7931d9f065dba07" address="unix:///run/containerd/s/236fb703244d3022520c40f11db6b88b2863c4ea1359501ef7151fe38ab9bd08" protocol=ttrpc version=3 Jun 20 19:14:28.163504 containerd[1732]: time="2025-06-20T19:14:28.163443236Z" level=info msg="CreateContainer within sandbox \"16dcd6d0d5e9fc028f6f39d2e93b177415f8367792d7efa22bcc37a07b3adbe5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8b98001f0ace1058e15db4f540b33c49d71c96309a8f08c55a93deb1f52bca39\"" Jun 20 19:14:28.164090 containerd[1732]: time="2025-06-20T19:14:28.164069955Z" level=info msg="StartContainer for \"8b98001f0ace1058e15db4f540b33c49d71c96309a8f08c55a93deb1f52bca39\"" Jun 20 19:14:28.167250 containerd[1732]: time="2025-06-20T19:14:28.167222223Z" level=info msg="connecting to shim 8b98001f0ace1058e15db4f540b33c49d71c96309a8f08c55a93deb1f52bca39" address="unix:///run/containerd/s/352a7d996d136bd67bed0e339a6b2c2d0d4359f1f0989b8b702a858bc1cc2598" protocol=ttrpc version=3 Jun 20 19:14:28.182739 
kubelet[2747]: W0620 19:14:28.182416 2747 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.4:6443: connect: connection refused Jun 20 19:14:28.183729 kubelet[2747]: E0620 19:14:28.183346 2747 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.4:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:14:28.189877 systemd[1]: Started cri-containerd-c46bed91e671253047ebe38b435913c585a0ddc4ae62eefdc7931d9f065dba07.scope - libcontainer container c46bed91e671253047ebe38b435913c585a0ddc4ae62eefdc7931d9f065dba07. Jun 20 19:14:28.194036 systemd[1]: Started cri-containerd-8b98001f0ace1058e15db4f540b33c49d71c96309a8f08c55a93deb1f52bca39.scope - libcontainer container 8b98001f0ace1058e15db4f540b33c49d71c96309a8f08c55a93deb1f52bca39. 
Jun 20 19:14:28.237724 containerd[1732]: time="2025-06-20T19:14:28.237657125Z" level=info msg="StartContainer for \"73bdcf165f1f0b6e4c18e3a3e0e08db1f244054b258a4abe37b653bbeee1f299\" returns successfully" Jun 20 19:14:28.269690 containerd[1732]: time="2025-06-20T19:14:28.269648783Z" level=info msg="StartContainer for \"8b98001f0ace1058e15db4f540b33c49d71c96309a8f08c55a93deb1f52bca39\" returns successfully" Jun 20 19:14:28.285252 containerd[1732]: time="2025-06-20T19:14:28.285211950Z" level=info msg="StartContainer for \"c46bed91e671253047ebe38b435913c585a0ddc4ae62eefdc7931d9f065dba07\" returns successfully" Jun 20 19:14:28.410830 kubelet[2747]: E0620 19:14:28.409179 2747 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-69d2cbc98d\" not found" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:28.420721 kubelet[2747]: E0620 19:14:28.419466 2747 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-69d2cbc98d\" not found" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:28.425201 kubelet[2747]: E0620 19:14:28.425181 2747 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-69d2cbc98d\" not found" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:28.958492 kubelet[2747]: I0620 19:14:28.958462 2747 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:29.427329 kubelet[2747]: E0620 19:14:29.427295 2747 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-69d2cbc98d\" not found" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:29.427719 kubelet[2747]: E0620 19:14:29.427686 2747 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-69d2cbc98d\" not found" node="ci-4344.1.0-a-69d2cbc98d" Jun 
20 19:14:29.997521 kubelet[2747]: E0620 19:14:29.997478 2747 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.0-a-69d2cbc98d\" not found" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:30.183295 kubelet[2747]: I0620 19:14:30.183248 2747 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:30.265849 kubelet[2747]: I0620 19:14:30.265686 2747 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:30.269984 kubelet[2747]: E0620 19:14:30.269947 2747 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.0-a-69d2cbc98d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:30.269984 kubelet[2747]: I0620 19:14:30.269982 2747 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:30.271735 kubelet[2747]: E0620 19:14:30.271692 2747 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.0-a-69d2cbc98d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:30.271735 kubelet[2747]: I0620 19:14:30.271729 2747 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:30.273275 kubelet[2747]: E0620 19:14:30.273137 2747 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-69d2cbc98d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:30.353249 kubelet[2747]: I0620 19:14:30.353187 2747 apiserver.go:52] "Watching apiserver" Jun 20 19:14:30.368310 kubelet[2747]: I0620 
19:14:30.368272 2747 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:14:32.023474 systemd[1]: Reload requested from client PID 3020 ('systemctl') (unit session-9.scope)... Jun 20 19:14:32.023490 systemd[1]: Reloading... Jun 20 19:14:32.111731 zram_generator::config[3062]: No configuration found. Jun 20 19:14:32.202136 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:14:32.312412 systemd[1]: Reloading finished in 288 ms. Jun 20 19:14:32.340456 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:14:32.358665 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:14:32.358937 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:14:32.358993 systemd[1]: kubelet.service: Consumed 598ms CPU time, 129.1M memory peak. Jun 20 19:14:32.360600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:14:32.862208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:14:32.872008 (kubelet)[3133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:14:32.961396 kubelet[3133]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:14:32.961396 kubelet[3133]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jun 20 19:14:32.961396 kubelet[3133]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:14:32.961826 kubelet[3133]: I0620 19:14:32.961545 3133 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:14:32.973006 kubelet[3133]: I0620 19:14:32.972236 3133 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:14:32.973006 kubelet[3133]: I0620 19:14:32.972259 3133 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:14:32.973006 kubelet[3133]: I0620 19:14:32.972431 3133 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:14:32.974058 kubelet[3133]: I0620 19:14:32.974038 3133 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 20 19:14:32.976751 kubelet[3133]: I0620 19:14:32.976730 3133 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:14:32.979713 kubelet[3133]: I0620 19:14:32.979682 3133 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:14:32.981972 kubelet[3133]: I0620 19:14:32.981951 3133 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:14:32.982142 kubelet[3133]: I0620 19:14:32.982113 3133 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:14:32.982300 kubelet[3133]: I0620 19:14:32.982139 3133 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-69d2cbc98d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:14:32.982399 kubelet[3133]: I0620 19:14:32.982308 3133 topology_manager.go:138] "Creating topology manager 
with none policy" Jun 20 19:14:32.982399 kubelet[3133]: I0620 19:14:32.982318 3133 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:14:32.982399 kubelet[3133]: I0620 19:14:32.982364 3133 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:14:32.982487 kubelet[3133]: I0620 19:14:32.982481 3133 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:14:32.982514 kubelet[3133]: I0620 19:14:32.982500 3133 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:14:32.982728 kubelet[3133]: I0620 19:14:32.982521 3133 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:14:32.982728 kubelet[3133]: I0620 19:14:32.982532 3133 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:14:32.984052 kubelet[3133]: I0620 19:14:32.984031 3133 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:14:32.985712 kubelet[3133]: I0620 19:14:32.984569 3133 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:14:32.986298 kubelet[3133]: I0620 19:14:32.986282 3133 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:14:32.986341 kubelet[3133]: I0620 19:14:32.986330 3133 server.go:1287] "Started kubelet" Jun 20 19:14:32.989646 kubelet[3133]: I0620 19:14:32.989621 3133 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:14:32.996724 kubelet[3133]: I0620 19:14:32.995834 3133 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:14:32.997195 kubelet[3133]: I0620 19:14:32.997185 3133 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:14:32.997386 kubelet[3133]: E0620 19:14:32.997376 3133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-69d2cbc98d\" not 
found" Jun 20 19:14:32.997921 kubelet[3133]: I0620 19:14:32.997909 3133 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:14:32.998853 kubelet[3133]: I0620 19:14:32.998841 3133 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:14:33.003631 kubelet[3133]: I0620 19:14:33.003603 3133 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:14:33.004560 kubelet[3133]: I0620 19:14:33.004543 3133 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:14:33.007722 kubelet[3133]: I0620 19:14:33.005542 3133 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:14:33.008021 kubelet[3133]: I0620 19:14:33.008009 3133 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:14:33.008130 kubelet[3133]: I0620 19:14:33.008117 3133 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:14:33.008203 kubelet[3133]: I0620 19:14:33.008188 3133 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:14:33.011096 kubelet[3133]: I0620 19:14:33.011058 3133 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:14:33.013996 kubelet[3133]: I0620 19:14:33.013979 3133 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:14:33.014773 kubelet[3133]: I0620 19:14:33.014082 3133 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:14:33.014880 kubelet[3133]: I0620 19:14:33.014869 3133 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 20 19:14:33.014923 kubelet[3133]: I0620 19:14:33.014917 3133 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:14:33.015007 kubelet[3133]: E0620 19:14:33.014994 3133 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:14:33.029682 kubelet[3133]: I0620 19:14:33.029665 3133 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:14:33.079779 kubelet[3133]: I0620 19:14:33.079751 3133 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:14:33.079779 kubelet[3133]: I0620 19:14:33.079767 3133 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:14:33.079947 kubelet[3133]: I0620 19:14:33.079796 3133 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:14:33.079973 kubelet[3133]: I0620 19:14:33.079949 3133 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:14:33.079973 kubelet[3133]: I0620 19:14:33.079957 3133 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:14:33.080019 kubelet[3133]: I0620 19:14:33.079977 3133 policy_none.go:49] "None policy: Start" Jun 20 19:14:33.080019 kubelet[3133]: I0620 19:14:33.079987 3133 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:14:33.080019 kubelet[3133]: I0620 19:14:33.079995 3133 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:14:33.080108 kubelet[3133]: I0620 19:14:33.080096 3133 state_mem.go:75] "Updated machine memory state" Jun 20 19:14:33.083611 kubelet[3133]: I0620 19:14:33.083275 3133 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:14:33.083611 kubelet[3133]: I0620 19:14:33.083432 3133 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:14:33.083611 kubelet[3133]: I0620 19:14:33.083441 3133 container_log_manager.go:189] "Initializing 
container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:14:33.084058 kubelet[3133]: I0620 19:14:33.084046 3133 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:14:33.087048 kubelet[3133]: E0620 19:14:33.086782 3133 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:14:33.116183 kubelet[3133]: I0620 19:14:33.115684 3133 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:33.117003 kubelet[3133]: I0620 19:14:33.115820 3133 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:33.117284 kubelet[3133]: I0620 19:14:33.115904 3133 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:33.123933 kubelet[3133]: W0620 19:14:33.123903 3133 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:14:33.127524 kubelet[3133]: W0620 19:14:33.127355 3133 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:14:33.127848 kubelet[3133]: W0620 19:14:33.127503 3133 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:14:33.187445 kubelet[3133]: I0620 19:14:33.187273 3133 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:33.200000 kubelet[3133]: I0620 19:14:33.199950 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5e49d406eccac60f94dcdaced78d29f0-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-69d2cbc98d\" (UID: \"5e49d406eccac60f94dcdaced78d29f0\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:33.200192 kubelet[3133]: I0620 19:14:33.200154 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e49d406eccac60f94dcdaced78d29f0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-69d2cbc98d\" (UID: \"5e49d406eccac60f94dcdaced78d29f0\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:33.200228 kubelet[3133]: I0620 19:14:33.200187 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f891574fe4235666ea29b7abd16072f0-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-69d2cbc98d\" (UID: \"f891574fe4235666ea29b7abd16072f0\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:33.200228 kubelet[3133]: I0620 19:14:33.200219 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1bb35ef9fdc89f5ec34bc2b6319b320-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-69d2cbc98d\" (UID: \"b1bb35ef9fdc89f5ec34bc2b6319b320\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:33.200480 kubelet[3133]: I0620 19:14:33.200239 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1bb35ef9fdc89f5ec34bc2b6319b320-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-69d2cbc98d\" (UID: \"b1bb35ef9fdc89f5ec34bc2b6319b320\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:33.200480 kubelet[3133]: I0620 19:14:33.200260 
3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1bb35ef9fdc89f5ec34bc2b6319b320-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-69d2cbc98d\" (UID: \"b1bb35ef9fdc89f5ec34bc2b6319b320\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:33.200480 kubelet[3133]: I0620 19:14:33.200280 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e49d406eccac60f94dcdaced78d29f0-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-69d2cbc98d\" (UID: \"5e49d406eccac60f94dcdaced78d29f0\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:33.200480 kubelet[3133]: I0620 19:14:33.200311 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5e49d406eccac60f94dcdaced78d29f0-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-69d2cbc98d\" (UID: \"5e49d406eccac60f94dcdaced78d29f0\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:33.200480 kubelet[3133]: I0620 19:14:33.200336 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e49d406eccac60f94dcdaced78d29f0-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-69d2cbc98d\" (UID: \"5e49d406eccac60f94dcdaced78d29f0\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:33.201853 kubelet[3133]: I0620 19:14:33.201779 3133 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:14:33.203318 kubelet[3133]: I0620 19:14:33.201961 3133 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.0-a-69d2cbc98d" Jun 20 
19:14:33.983914 kubelet[3133]: I0620 19:14:33.983878 3133 apiserver.go:52] "Watching apiserver" Jun 20 19:14:33.999792 kubelet[3133]: I0620 19:14:33.999750 3133 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:14:34.074532 kubelet[3133]: I0620 19:14:34.074412 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.0-a-69d2cbc98d" podStartSLOduration=1.074393487 podStartE2EDuration="1.074393487s" podCreationTimestamp="2025-06-20 19:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:14:34.074118447 +0000 UTC m=+1.198258263" watchObservedRunningTime="2025-06-20 19:14:34.074393487 +0000 UTC m=+1.198533289" Jun 20 19:14:34.083666 kubelet[3133]: I0620 19:14:34.083618 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.0-a-69d2cbc98d" podStartSLOduration=1.083599433 podStartE2EDuration="1.083599433s" podCreationTimestamp="2025-06-20 19:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:14:34.083433788 +0000 UTC m=+1.207573625" watchObservedRunningTime="2025-06-20 19:14:34.083599433 +0000 UTC m=+1.207739244" Jun 20 19:14:34.103530 kubelet[3133]: I0620 19:14:34.103465 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-69d2cbc98d" podStartSLOduration=1.103448556 podStartE2EDuration="1.103448556s" podCreationTimestamp="2025-06-20 19:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:14:34.093968675 +0000 UTC m=+1.218108516" watchObservedRunningTime="2025-06-20 19:14:34.103448556 +0000 UTC m=+1.227588371" Jun 20 
19:14:37.225691 kubelet[3133]: I0620 19:14:37.225649 3133 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:14:37.226202 kubelet[3133]: I0620 19:14:37.226157 3133 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:14:37.226239 containerd[1732]: time="2025-06-20T19:14:37.225964394Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 19:14:38.219005 systemd[1]: Created slice kubepods-besteffort-pod96fbf73f_7953_41d3_bb56_b24686fdd3bd.slice - libcontainer container kubepods-besteffort-pod96fbf73f_7953_41d3_bb56_b24686fdd3bd.slice. Jun 20 19:14:38.229572 kubelet[3133]: I0620 19:14:38.229452 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/96fbf73f-7953-41d3-bb56-b24686fdd3bd-kube-proxy\") pod \"kube-proxy-5nz4v\" (UID: \"96fbf73f-7953-41d3-bb56-b24686fdd3bd\") " pod="kube-system/kube-proxy-5nz4v" Jun 20 19:14:38.230487 kubelet[3133]: I0620 19:14:38.229895 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96fbf73f-7953-41d3-bb56-b24686fdd3bd-xtables-lock\") pod \"kube-proxy-5nz4v\" (UID: \"96fbf73f-7953-41d3-bb56-b24686fdd3bd\") " pod="kube-system/kube-proxy-5nz4v" Jun 20 19:14:38.230487 kubelet[3133]: I0620 19:14:38.229919 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96fbf73f-7953-41d3-bb56-b24686fdd3bd-lib-modules\") pod \"kube-proxy-5nz4v\" (UID: \"96fbf73f-7953-41d3-bb56-b24686fdd3bd\") " pod="kube-system/kube-proxy-5nz4v" Jun 20 19:14:38.230487 kubelet[3133]: I0620 19:14:38.230037 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tcw7n\" (UniqueName: \"kubernetes.io/projected/96fbf73f-7953-41d3-bb56-b24686fdd3bd-kube-api-access-tcw7n\") pod \"kube-proxy-5nz4v\" (UID: \"96fbf73f-7953-41d3-bb56-b24686fdd3bd\") " pod="kube-system/kube-proxy-5nz4v" Jun 20 19:14:38.421370 systemd[1]: Created slice kubepods-besteffort-pod6426b4f6_2140_4349_b4c8_e0e7a23a6e77.slice - libcontainer container kubepods-besteffort-pod6426b4f6_2140_4349_b4c8_e0e7a23a6e77.slice. Jun 20 19:14:38.430647 kubelet[3133]: I0620 19:14:38.430611 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6426b4f6-2140-4349-b4c8-e0e7a23a6e77-var-lib-calico\") pod \"tigera-operator-68f7c7984d-t44s6\" (UID: \"6426b4f6-2140-4349-b4c8-e0e7a23a6e77\") " pod="tigera-operator/tigera-operator-68f7c7984d-t44s6" Jun 20 19:14:38.430787 kubelet[3133]: I0620 19:14:38.430656 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9gqw\" (UniqueName: \"kubernetes.io/projected/6426b4f6-2140-4349-b4c8-e0e7a23a6e77-kube-api-access-k9gqw\") pod \"tigera-operator-68f7c7984d-t44s6\" (UID: \"6426b4f6-2140-4349-b4c8-e0e7a23a6e77\") " pod="tigera-operator/tigera-operator-68f7c7984d-t44s6" Jun 20 19:14:38.528187 containerd[1732]: time="2025-06-20T19:14:38.525982347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5nz4v,Uid:96fbf73f-7953-41d3-bb56-b24686fdd3bd,Namespace:kube-system,Attempt:0,}" Jun 20 19:14:38.565514 containerd[1732]: time="2025-06-20T19:14:38.565428471Z" level=info msg="connecting to shim bcb9f3302a65cc21a77c28412c0255e4ea9108f0b73b32b4f2a68fda21f829e5" address="unix:///run/containerd/s/dfcbacdb988e9316c9e30e9683ce84b990a0297eed76fbc102934ce143d20b7e" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:38.589881 systemd[1]: Started cri-containerd-bcb9f3302a65cc21a77c28412c0255e4ea9108f0b73b32b4f2a68fda21f829e5.scope - libcontainer 
container bcb9f3302a65cc21a77c28412c0255e4ea9108f0b73b32b4f2a68fda21f829e5. Jun 20 19:14:38.612692 containerd[1732]: time="2025-06-20T19:14:38.612654752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5nz4v,Uid:96fbf73f-7953-41d3-bb56-b24686fdd3bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcb9f3302a65cc21a77c28412c0255e4ea9108f0b73b32b4f2a68fda21f829e5\"" Jun 20 19:14:38.615721 containerd[1732]: time="2025-06-20T19:14:38.615660676Z" level=info msg="CreateContainer within sandbox \"bcb9f3302a65cc21a77c28412c0255e4ea9108f0b73b32b4f2a68fda21f829e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:14:38.634815 containerd[1732]: time="2025-06-20T19:14:38.634772409Z" level=info msg="Container 866f53dc10e768a6b40ba6b4db2f2413b2afc22d84259236336fd30767b2a29b: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:38.650535 containerd[1732]: time="2025-06-20T19:14:38.650499668Z" level=info msg="CreateContainer within sandbox \"bcb9f3302a65cc21a77c28412c0255e4ea9108f0b73b32b4f2a68fda21f829e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"866f53dc10e768a6b40ba6b4db2f2413b2afc22d84259236336fd30767b2a29b\"" Jun 20 19:14:38.651252 containerd[1732]: time="2025-06-20T19:14:38.651204859Z" level=info msg="StartContainer for \"866f53dc10e768a6b40ba6b4db2f2413b2afc22d84259236336fd30767b2a29b\"" Jun 20 19:14:38.652962 containerd[1732]: time="2025-06-20T19:14:38.652917085Z" level=info msg="connecting to shim 866f53dc10e768a6b40ba6b4db2f2413b2afc22d84259236336fd30767b2a29b" address="unix:///run/containerd/s/dfcbacdb988e9316c9e30e9683ce84b990a0297eed76fbc102934ce143d20b7e" protocol=ttrpc version=3 Jun 20 19:14:38.671881 systemd[1]: Started cri-containerd-866f53dc10e768a6b40ba6b4db2f2413b2afc22d84259236336fd30767b2a29b.scope - libcontainer container 866f53dc10e768a6b40ba6b4db2f2413b2afc22d84259236336fd30767b2a29b. 
Jun 20 19:14:38.707877 containerd[1732]: time="2025-06-20T19:14:38.707769804Z" level=info msg="StartContainer for \"866f53dc10e768a6b40ba6b4db2f2413b2afc22d84259236336fd30767b2a29b\" returns successfully" Jun 20 19:14:38.725746 containerd[1732]: time="2025-06-20T19:14:38.725690908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-t44s6,Uid:6426b4f6-2140-4349-b4c8-e0e7a23a6e77,Namespace:tigera-operator,Attempt:0,}" Jun 20 19:14:38.769023 containerd[1732]: time="2025-06-20T19:14:38.768921495Z" level=info msg="connecting to shim 9cb8109786435fd8fe8aa97e88cdea7e461d4dde902203b5a67edb76a69a07bc" address="unix:///run/containerd/s/b8fe39a11b0abac05bc99184f52cd473b069161f5c07660ea24681fd9692a2b5" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:38.793009 systemd[1]: Started cri-containerd-9cb8109786435fd8fe8aa97e88cdea7e461d4dde902203b5a67edb76a69a07bc.scope - libcontainer container 9cb8109786435fd8fe8aa97e88cdea7e461d4dde902203b5a67edb76a69a07bc. Jun 20 19:14:38.845886 containerd[1732]: time="2025-06-20T19:14:38.845844264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-t44s6,Uid:6426b4f6-2140-4349-b4c8-e0e7a23a6e77,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9cb8109786435fd8fe8aa97e88cdea7e461d4dde902203b5a67edb76a69a07bc\"" Jun 20 19:14:38.847905 containerd[1732]: time="2025-06-20T19:14:38.847772926Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 20 19:14:39.102221 kubelet[3133]: I0620 19:14:39.101982 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5nz4v" podStartSLOduration=1.101964046 podStartE2EDuration="1.101964046s" podCreationTimestamp="2025-06-20 19:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:14:39.086369125 +0000 UTC m=+6.210508935" watchObservedRunningTime="2025-06-20 
19:14:39.101964046 +0000 UTC m=+6.226103861" Jun 20 19:14:40.281917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2605557768.mount: Deactivated successfully. Jun 20 19:14:40.820923 containerd[1732]: time="2025-06-20T19:14:40.820872987Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:40.823680 containerd[1732]: time="2025-06-20T19:14:40.823638226Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=25059858" Jun 20 19:14:40.827446 containerd[1732]: time="2025-06-20T19:14:40.827388233Z" level=info msg="ImageCreate event name:\"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:40.835299 containerd[1732]: time="2025-06-20T19:14:40.835241373Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:40.835791 containerd[1732]: time="2025-06-20T19:14:40.835630587Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"25055853\" in 1.987812746s" Jun 20 19:14:40.835791 containerd[1732]: time="2025-06-20T19:14:40.835663096Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\"" Jun 20 19:14:40.838049 containerd[1732]: time="2025-06-20T19:14:40.838015486Z" level=info msg="CreateContainer within sandbox \"9cb8109786435fd8fe8aa97e88cdea7e461d4dde902203b5a67edb76a69a07bc\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 20 19:14:40.855904 containerd[1732]: time="2025-06-20T19:14:40.855811272Z" level=info msg="Container 8a5651f212f42c2ecdf48d660a176d51cd344c836de853bdb92c616a595d5dba: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:40.872898 containerd[1732]: time="2025-06-20T19:14:40.872861950Z" level=info msg="CreateContainer within sandbox \"9cb8109786435fd8fe8aa97e88cdea7e461d4dde902203b5a67edb76a69a07bc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8a5651f212f42c2ecdf48d660a176d51cd344c836de853bdb92c616a595d5dba\"" Jun 20 19:14:40.873307 containerd[1732]: time="2025-06-20T19:14:40.873287475Z" level=info msg="StartContainer for \"8a5651f212f42c2ecdf48d660a176d51cd344c836de853bdb92c616a595d5dba\"" Jun 20 19:14:40.874375 containerd[1732]: time="2025-06-20T19:14:40.874347764Z" level=info msg="connecting to shim 8a5651f212f42c2ecdf48d660a176d51cd344c836de853bdb92c616a595d5dba" address="unix:///run/containerd/s/b8fe39a11b0abac05bc99184f52cd473b069161f5c07660ea24681fd9692a2b5" protocol=ttrpc version=3 Jun 20 19:14:40.894848 systemd[1]: Started cri-containerd-8a5651f212f42c2ecdf48d660a176d51cd344c836de853bdb92c616a595d5dba.scope - libcontainer container 8a5651f212f42c2ecdf48d660a176d51cd344c836de853bdb92c616a595d5dba. 
Jun 20 19:14:40.922725 containerd[1732]: time="2025-06-20T19:14:40.922611883Z" level=info msg="StartContainer for \"8a5651f212f42c2ecdf48d660a176d51cd344c836de853bdb92c616a595d5dba\" returns successfully" Jun 20 19:14:41.082340 kubelet[3133]: I0620 19:14:41.082181 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-t44s6" podStartSLOduration=1.092444787 podStartE2EDuration="3.082155248s" podCreationTimestamp="2025-06-20 19:14:38 +0000 UTC" firstStartedPulling="2025-06-20 19:14:38.846883268 +0000 UTC m=+5.971023075" lastFinishedPulling="2025-06-20 19:14:40.836593716 +0000 UTC m=+7.960733536" observedRunningTime="2025-06-20 19:14:41.082057488 +0000 UTC m=+8.206197317" watchObservedRunningTime="2025-06-20 19:14:41.082155248 +0000 UTC m=+8.206295069" Jun 20 19:14:46.674203 sudo[2157]: pam_unix(sudo:session): session closed for user root Jun 20 19:14:46.769806 sshd[2156]: Connection closed by 10.200.16.10 port 54748 Jun 20 19:14:46.769778 sshd-session[2154]: pam_unix(sshd:session): session closed for user core Jun 20 19:14:46.776283 systemd[1]: sshd@6-10.200.4.4:22-10.200.16.10:54748.service: Deactivated successfully. Jun 20 19:14:46.780798 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:14:46.781353 systemd[1]: session-9.scope: Consumed 3.506s CPU time, 226.4M memory peak. Jun 20 19:14:46.787158 systemd-logind[1709]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:14:46.788167 systemd-logind[1709]: Removed session 9. Jun 20 19:14:50.245023 systemd[1]: Created slice kubepods-besteffort-podc4cc9c93_6b13_4efa_aa3b_5c988956fc9c.slice - libcontainer container kubepods-besteffort-podc4cc9c93_6b13_4efa_aa3b_5c988956fc9c.slice. 
Jun 20 19:14:50.311867 kubelet[3133]: I0620 19:14:50.311825 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lndfg\" (UniqueName: \"kubernetes.io/projected/c4cc9c93-6b13-4efa-aa3b-5c988956fc9c-kube-api-access-lndfg\") pod \"calico-typha-f9544856c-hmskt\" (UID: \"c4cc9c93-6b13-4efa-aa3b-5c988956fc9c\") " pod="calico-system/calico-typha-f9544856c-hmskt" Jun 20 19:14:50.311867 kubelet[3133]: I0620 19:14:50.311864 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4cc9c93-6b13-4efa-aa3b-5c988956fc9c-tigera-ca-bundle\") pod \"calico-typha-f9544856c-hmskt\" (UID: \"c4cc9c93-6b13-4efa-aa3b-5c988956fc9c\") " pod="calico-system/calico-typha-f9544856c-hmskt" Jun 20 19:14:50.312233 kubelet[3133]: I0620 19:14:50.311880 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c4cc9c93-6b13-4efa-aa3b-5c988956fc9c-typha-certs\") pod \"calico-typha-f9544856c-hmskt\" (UID: \"c4cc9c93-6b13-4efa-aa3b-5c988956fc9c\") " pod="calico-system/calico-typha-f9544856c-hmskt" Jun 20 19:14:50.556568 containerd[1732]: time="2025-06-20T19:14:50.556520944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f9544856c-hmskt,Uid:c4cc9c93-6b13-4efa-aa3b-5c988956fc9c,Namespace:calico-system,Attempt:0,}" Jun 20 19:14:50.616408 containerd[1732]: time="2025-06-20T19:14:50.616353171Z" level=info msg="connecting to shim 8a1e947162fabc77e50c9e67b60a78d58cfaf3a08761badc4a272a736aec4a0a" address="unix:///run/containerd/s/4567e6d7c1299767f905f40cfb6f8107914b65d49e77e74b1c5efaa01edc1241" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:50.658055 systemd[1]: Started cri-containerd-8a1e947162fabc77e50c9e67b60a78d58cfaf3a08761badc4a272a736aec4a0a.scope - libcontainer container 
8a1e947162fabc77e50c9e67b60a78d58cfaf3a08761badc4a272a736aec4a0a. Jun 20 19:14:50.671080 kubelet[3133]: W0620 19:14:50.670216 3133 reflector.go:569] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4344.1.0-a-69d2cbc98d" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344.1.0-a-69d2cbc98d' and this object Jun 20 19:14:50.671080 kubelet[3133]: E0620 19:14:50.670261 3133 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:ci-4344.1.0-a-69d2cbc98d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344.1.0-a-69d2cbc98d' and this object" logger="UnhandledError" Jun 20 19:14:50.671080 kubelet[3133]: I0620 19:14:50.670319 3133 status_manager.go:890] "Failed to get status for pod" podUID="85be6b71-3ba2-42df-b08b-6965626b057b" pod="calico-system/calico-node-255jq" err="pods \"calico-node-255jq\" is forbidden: User \"system:node:ci-4344.1.0-a-69d2cbc98d\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344.1.0-a-69d2cbc98d' and this object" Jun 20 19:14:50.676484 systemd[1]: Created slice kubepods-besteffort-pod85be6b71_3ba2_42df_b08b_6965626b057b.slice - libcontainer container kubepods-besteffort-pod85be6b71_3ba2_42df_b08b_6965626b057b.slice. 
Jun 20 19:14:50.714766 kubelet[3133]: I0620 19:14:50.714691 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85be6b71-3ba2-42df-b08b-6965626b057b-lib-modules\") pod \"calico-node-255jq\" (UID: \"85be6b71-3ba2-42df-b08b-6965626b057b\") " pod="calico-system/calico-node-255jq" Jun 20 19:14:50.714976 kubelet[3133]: I0620 19:14:50.714815 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85be6b71-3ba2-42df-b08b-6965626b057b-tigera-ca-bundle\") pod \"calico-node-255jq\" (UID: \"85be6b71-3ba2-42df-b08b-6965626b057b\") " pod="calico-system/calico-node-255jq" Jun 20 19:14:50.714976 kubelet[3133]: I0620 19:14:50.714836 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl85v\" (UniqueName: \"kubernetes.io/projected/85be6b71-3ba2-42df-b08b-6965626b057b-kube-api-access-jl85v\") pod \"calico-node-255jq\" (UID: \"85be6b71-3ba2-42df-b08b-6965626b057b\") " pod="calico-system/calico-node-255jq" Jun 20 19:14:50.715901 kubelet[3133]: I0620 19:14:50.715091 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/85be6b71-3ba2-42df-b08b-6965626b057b-cni-log-dir\") pod \"calico-node-255jq\" (UID: \"85be6b71-3ba2-42df-b08b-6965626b057b\") " pod="calico-system/calico-node-255jq" Jun 20 19:14:50.715901 kubelet[3133]: I0620 19:14:50.715134 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/85be6b71-3ba2-42df-b08b-6965626b057b-var-run-calico\") pod \"calico-node-255jq\" (UID: \"85be6b71-3ba2-42df-b08b-6965626b057b\") " pod="calico-system/calico-node-255jq" Jun 20 19:14:50.715901 kubelet[3133]: I0620 
19:14:50.715157 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/85be6b71-3ba2-42df-b08b-6965626b057b-policysync\") pod \"calico-node-255jq\" (UID: \"85be6b71-3ba2-42df-b08b-6965626b057b\") " pod="calico-system/calico-node-255jq" Jun 20 19:14:50.716123 kubelet[3133]: I0620 19:14:50.715172 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/85be6b71-3ba2-42df-b08b-6965626b057b-var-lib-calico\") pod \"calico-node-255jq\" (UID: \"85be6b71-3ba2-42df-b08b-6965626b057b\") " pod="calico-system/calico-node-255jq" Jun 20 19:14:50.716295 kubelet[3133]: I0620 19:14:50.716035 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/85be6b71-3ba2-42df-b08b-6965626b057b-flexvol-driver-host\") pod \"calico-node-255jq\" (UID: \"85be6b71-3ba2-42df-b08b-6965626b057b\") " pod="calico-system/calico-node-255jq" Jun 20 19:14:50.716295 kubelet[3133]: I0620 19:14:50.716182 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85be6b71-3ba2-42df-b08b-6965626b057b-xtables-lock\") pod \"calico-node-255jq\" (UID: \"85be6b71-3ba2-42df-b08b-6965626b057b\") " pod="calico-system/calico-node-255jq" Jun 20 19:14:50.716295 kubelet[3133]: I0620 19:14:50.716202 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/85be6b71-3ba2-42df-b08b-6965626b057b-cni-bin-dir\") pod \"calico-node-255jq\" (UID: \"85be6b71-3ba2-42df-b08b-6965626b057b\") " pod="calico-system/calico-node-255jq" Jun 20 19:14:50.716295 kubelet[3133]: I0620 19:14:50.716246 3133 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/85be6b71-3ba2-42df-b08b-6965626b057b-cni-net-dir\") pod \"calico-node-255jq\" (UID: \"85be6b71-3ba2-42df-b08b-6965626b057b\") " pod="calico-system/calico-node-255jq" Jun 20 19:14:50.716295 kubelet[3133]: I0620 19:14:50.716266 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/85be6b71-3ba2-42df-b08b-6965626b057b-node-certs\") pod \"calico-node-255jq\" (UID: \"85be6b71-3ba2-42df-b08b-6965626b057b\") " pod="calico-system/calico-node-255jq" Jun 20 19:14:50.787029 containerd[1732]: time="2025-06-20T19:14:50.786991245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f9544856c-hmskt,Uid:c4cc9c93-6b13-4efa-aa3b-5c988956fc9c,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a1e947162fabc77e50c9e67b60a78d58cfaf3a08761badc4a272a736aec4a0a\"" Jun 20 19:14:50.788272 containerd[1732]: time="2025-06-20T19:14:50.788245935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 20 19:14:50.830848 kubelet[3133]: E0620 19:14:50.830757 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:50.830848 kubelet[3133]: W0620 19:14:50.830789 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:50.830848 kubelet[3133]: E0620 19:14:50.830825 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:50.925682 kubelet[3133]: E0620 19:14:50.925453 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcgqf" podUID="7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0" Jun 20 19:14:51.003495 kubelet[3133]: E0620 19:14:51.003232 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.003495 kubelet[3133]: W0620 19:14:51.003292 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.003495 kubelet[3133]: E0620 19:14:51.003319 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:51.004595 kubelet[3133]: E0620 19:14:51.004504 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.004595 kubelet[3133]: W0620 19:14:51.004526 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.004595 kubelet[3133]: E0620 19:14:51.004547 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:51.005191 kubelet[3133]: E0620 19:14:51.005032 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.005191 kubelet[3133]: W0620 19:14:51.005047 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.005191 kubelet[3133]: E0620 19:14:51.005063 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:51.005629 kubelet[3133]: E0620 19:14:51.005617 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.005750 kubelet[3133]: W0620 19:14:51.005738 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.005834 kubelet[3133]: E0620 19:14:51.005810 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:51.006405 kubelet[3133]: E0620 19:14:51.006390 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.006525 kubelet[3133]: W0620 19:14:51.006475 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.006525 kubelet[3133]: E0620 19:14:51.006491 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:51.007095 kubelet[3133]: E0620 19:14:51.006768 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.007095 kubelet[3133]: W0620 19:14:51.006779 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.007095 kubelet[3133]: E0620 19:14:51.006790 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:51.007411 kubelet[3133]: E0620 19:14:51.007349 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.007411 kubelet[3133]: W0620 19:14:51.007362 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.007411 kubelet[3133]: E0620 19:14:51.007375 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:51.007731 kubelet[3133]: E0620 19:14:51.007677 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.007731 kubelet[3133]: W0620 19:14:51.007687 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.007943 kubelet[3133]: E0620 19:14:51.007884 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:51.008359 kubelet[3133]: E0620 19:14:51.008347 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.008472 kubelet[3133]: W0620 19:14:51.008422 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.008472 kubelet[3133]: E0620 19:14:51.008439 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:51.008714 kubelet[3133]: E0620 19:14:51.008650 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.008714 kubelet[3133]: W0620 19:14:51.008661 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.008714 kubelet[3133]: E0620 19:14:51.008670 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:51.009112 kubelet[3133]: E0620 19:14:51.009099 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.009361 kubelet[3133]: W0620 19:14:51.009256 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.009361 kubelet[3133]: E0620 19:14:51.009274 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:51.009789 kubelet[3133]: E0620 19:14:51.009776 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.009922 kubelet[3133]: W0620 19:14:51.009854 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.009922 kubelet[3133]: E0620 19:14:51.009869 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:51.010208 kubelet[3133]: E0620 19:14:51.010172 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.010208 kubelet[3133]: W0620 19:14:51.010187 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.010208 kubelet[3133]: E0620 19:14:51.010198 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:51.010327 kubelet[3133]: E0620 19:14:51.010315 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.010327 kubelet[3133]: W0620 19:14:51.010325 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.010391 kubelet[3133]: E0620 19:14:51.010333 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:51.010431 kubelet[3133]: E0620 19:14:51.010423 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.010453 kubelet[3133]: W0620 19:14:51.010430 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.010480 kubelet[3133]: E0620 19:14:51.010461 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:51.010579 kubelet[3133]: E0620 19:14:51.010557 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.010579 kubelet[3133]: W0620 19:14:51.010570 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.010640 kubelet[3133]: E0620 19:14:51.010580 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:51.010736 kubelet[3133]: E0620 19:14:51.010724 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.010736 kubelet[3133]: W0620 19:14:51.010733 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.010801 kubelet[3133]: E0620 19:14:51.010744 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:51.010852 kubelet[3133]: E0620 19:14:51.010843 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.010877 kubelet[3133]: W0620 19:14:51.010852 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.010877 kubelet[3133]: E0620 19:14:51.010860 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:51.010960 kubelet[3133]: E0620 19:14:51.010951 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.010960 kubelet[3133]: W0620 19:14:51.010958 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.011011 kubelet[3133]: E0620 19:14:51.010965 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:51.011101 kubelet[3133]: E0620 19:14:51.011089 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.011101 kubelet[3133]: W0620 19:14:51.011098 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.011153 kubelet[3133]: E0620 19:14:51.011107 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:51.018754 kubelet[3133]: E0620 19:14:51.018714 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.018754 kubelet[3133]: W0620 19:14:51.018729 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.018754 kubelet[3133]: E0620 19:14:51.018743 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:51.018874 kubelet[3133]: I0620 19:14:51.018772 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0-varrun\") pod \"csi-node-driver-tcgqf\" (UID: \"7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0\") " pod="calico-system/csi-node-driver-tcgqf" Jun 20 19:14:51.019095 kubelet[3133]: E0620 19:14:51.018974 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.019095 kubelet[3133]: W0620 19:14:51.018986 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.019095 kubelet[3133]: E0620 19:14:51.018997 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:51.019095 kubelet[3133]: I0620 19:14:51.019016 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mlnl\" (UniqueName: \"kubernetes.io/projected/7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0-kube-api-access-5mlnl\") pod \"csi-node-driver-tcgqf\" (UID: \"7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0\") " pod="calico-system/csi-node-driver-tcgqf" Jun 20 19:14:51.019233 kubelet[3133]: E0620 19:14:51.019224 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.019268 kubelet[3133]: W0620 19:14:51.019261 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.019383 kubelet[3133]: E0620 19:14:51.019301 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:51.019383 kubelet[3133]: I0620 19:14:51.019318 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0-registration-dir\") pod \"csi-node-driver-tcgqf\" (UID: \"7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0\") " pod="calico-system/csi-node-driver-tcgqf" Jun 20 19:14:51.019492 kubelet[3133]: E0620 19:14:51.019485 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.019527 kubelet[3133]: W0620 19:14:51.019521 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.019606 kubelet[3133]: E0620 19:14:51.019559 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:51.019606 kubelet[3133]: I0620 19:14:51.019577 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0-socket-dir\") pod \"csi-node-driver-tcgqf\" (UID: \"7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0\") " pod="calico-system/csi-node-driver-tcgqf" Jun 20 19:14:51.019819 kubelet[3133]: E0620 19:14:51.019762 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:51.019819 kubelet[3133]: W0620 19:14:51.019772 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:51.019819 kubelet[3133]: E0620 19:14:51.019789 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:51.020718 kubelet[3133]: I0620 19:14:51.020469 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0-kubelet-dir\") pod \"csi-node-driver-tcgqf\" (UID: \"7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0\") " pod="calico-system/csi-node-driver-tcgqf" Jun 20 19:14:51.818322 kubelet[3133]: E0620 19:14:51.818279 3133 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Jun 20 19:14:51.818777 kubelet[3133]: E0620 19:14:51.818410 3133 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85be6b71-3ba2-42df-b08b-6965626b057b-node-certs podName:85be6b71-3ba2-42df-b08b-6965626b057b nodeName:}" failed. No retries permitted until 2025-06-20 19:14:52.318385905 +0000 UTC m=+19.442525715 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/85be6b71-3ba2-42df-b08b-6965626b057b-node-certs") pod "calico-node-255jq" (UID: "85be6b71-3ba2-42df-b08b-6965626b057b") : failed to sync secret cache: timed out waiting for the condition Jun 20 19:14:52.211156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3693717427.mount: Deactivated successfully. 
Jun 20 19:14:52.343729 kubelet[3133]: E0620 19:14:52.343651 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.343729 kubelet[3133]: W0620 19:14:52.343672 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.343859 kubelet[3133]: E0620 19:14:52.343797 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:52.483295 containerd[1732]: time="2025-06-20T19:14:52.483168267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-255jq,Uid:85be6b71-3ba2-42df-b08b-6965626b057b,Namespace:calico-system,Attempt:0,}" Jun 20 19:14:52.531015 containerd[1732]: time="2025-06-20T19:14:52.530951373Z" level=info msg="connecting to shim 908304e01d69c1f52fd66c35463f5a5b8c0934d8ebdaa077fc8878ab543e4d09" address="unix:///run/containerd/s/f207414efa28409e68d914f1a158d382bb1b55196c3ba5adf51ed133eef5a0a4" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:52.554880 systemd[1]: Started cri-containerd-908304e01d69c1f52fd66c35463f5a5b8c0934d8ebdaa077fc8878ab543e4d09.scope - libcontainer container 908304e01d69c1f52fd66c35463f5a5b8c0934d8ebdaa077fc8878ab543e4d09. Jun 20 19:14:52.591322 containerd[1732]: time="2025-06-20T19:14:52.591289027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-255jq,Uid:85be6b71-3ba2-42df-b08b-6965626b057b,Namespace:calico-system,Attempt:0,} returns sandbox id \"908304e01d69c1f52fd66c35463f5a5b8c0934d8ebdaa077fc8878ab543e4d09\"" Jun 20 19:14:53.015845 kubelet[3133]: E0620 19:14:53.015795 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcgqf" podUID="7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0" Jun 20 19:14:53.310847 containerd[1732]: time="2025-06-20T19:14:53.310799233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:53.313070 containerd[1732]: time="2025-06-20T19:14:53.313035960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=35227888" Jun 20 19:14:53.317264 containerd[1732]: 
time="2025-06-20T19:14:53.317177505Z" level=info msg="ImageCreate event name:\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:53.325830 containerd[1732]: time="2025-06-20T19:14:53.325755794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:53.326347 containerd[1732]: time="2025-06-20T19:14:53.326189489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"35227742\" in 2.53791192s" Jun 20 19:14:53.326347 containerd[1732]: time="2025-06-20T19:14:53.326221519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\"" Jun 20 19:14:53.327127 containerd[1732]: time="2025-06-20T19:14:53.327102761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 20 19:14:53.339455 containerd[1732]: time="2025-06-20T19:14:53.338966415Z" level=info msg="CreateContainer within sandbox \"8a1e947162fabc77e50c9e67b60a78d58cfaf3a08761badc4a272a736aec4a0a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 20 19:14:53.367153 containerd[1732]: time="2025-06-20T19:14:53.367099014Z" level=info msg="Container b81aa8a820f1bb9dcbcec9e04779a37770cf38e40f6e861907ed9cb72f415318: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:53.390746 containerd[1732]: time="2025-06-20T19:14:53.390706812Z" level=info msg="CreateContainer within sandbox 
\"8a1e947162fabc77e50c9e67b60a78d58cfaf3a08761badc4a272a736aec4a0a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b81aa8a820f1bb9dcbcec9e04779a37770cf38e40f6e861907ed9cb72f415318\"" Jun 20 19:14:53.391453 containerd[1732]: time="2025-06-20T19:14:53.391425465Z" level=info msg="StartContainer for \"b81aa8a820f1bb9dcbcec9e04779a37770cf38e40f6e861907ed9cb72f415318\"" Jun 20 19:14:53.392626 containerd[1732]: time="2025-06-20T19:14:53.392587414Z" level=info msg="connecting to shim b81aa8a820f1bb9dcbcec9e04779a37770cf38e40f6e861907ed9cb72f415318" address="unix:///run/containerd/s/4567e6d7c1299767f905f40cfb6f8107914b65d49e77e74b1c5efaa01edc1241" protocol=ttrpc version=3 Jun 20 19:14:53.409892 systemd[1]: Started cri-containerd-b81aa8a820f1bb9dcbcec9e04779a37770cf38e40f6e861907ed9cb72f415318.scope - libcontainer container b81aa8a820f1bb9dcbcec9e04779a37770cf38e40f6e861907ed9cb72f415318. Jun 20 19:14:53.456957 containerd[1732]: time="2025-06-20T19:14:53.456915559Z" level=info msg="StartContainer for \"b81aa8a820f1bb9dcbcec9e04779a37770cf38e40f6e861907ed9cb72f415318\" returns successfully" Jun 20 19:14:54.129498 kubelet[3133]: E0620 19:14:54.129463 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:54.129498 kubelet[3133]: W0620 19:14:54.129489 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:54.130061 kubelet[3133]: E0620 19:14:54.129514 3133 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:54.742900 containerd[1732]: time="2025-06-20T19:14:54.742800107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:54.752946 containerd[1732]: time="2025-06-20T19:14:54.752885122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4441627" Jun 20 19:14:54.756658 containerd[1732]: time="2025-06-20T19:14:54.756610759Z" level=info msg="ImageCreate event name:\"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:54.761787 containerd[1732]: time="2025-06-20T19:14:54.761714932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:54.762264 containerd[1732]: time="2025-06-20T19:14:54.762125687Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5934290\" in 1.434990362s" Jun 20 19:14:54.762264 containerd[1732]: time="2025-06-20T19:14:54.762158527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\"" Jun 20 19:14:54.764688 containerd[1732]: time="2025-06-20T19:14:54.764654102Z" level=info msg="CreateContainer within sandbox \"908304e01d69c1f52fd66c35463f5a5b8c0934d8ebdaa077fc8878ab543e4d09\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 20 19:14:54.787076 containerd[1732]: time="2025-06-20T19:14:54.785747744Z" level=info msg="Container bac8d8861223e6437d49168a07a87f30c1ed8eeb8f89e048ab2d1f77a4e4a65a: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:54.818719 containerd[1732]: time="2025-06-20T19:14:54.818662495Z" level=info msg="CreateContainer within sandbox \"908304e01d69c1f52fd66c35463f5a5b8c0934d8ebdaa077fc8878ab543e4d09\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bac8d8861223e6437d49168a07a87f30c1ed8eeb8f89e048ab2d1f77a4e4a65a\"" Jun 20 19:14:54.819871 containerd[1732]: time="2025-06-20T19:14:54.819811493Z" level=info msg="StartContainer for \"bac8d8861223e6437d49168a07a87f30c1ed8eeb8f89e048ab2d1f77a4e4a65a\"" Jun 20 19:14:54.822583 containerd[1732]: time="2025-06-20T19:14:54.822537451Z" level=info msg="connecting to shim bac8d8861223e6437d49168a07a87f30c1ed8eeb8f89e048ab2d1f77a4e4a65a" address="unix:///run/containerd/s/f207414efa28409e68d914f1a158d382bb1b55196c3ba5adf51ed133eef5a0a4" protocol=ttrpc version=3 Jun 20 19:14:54.848947 systemd[1]: Started cri-containerd-bac8d8861223e6437d49168a07a87f30c1ed8eeb8f89e048ab2d1f77a4e4a65a.scope - libcontainer container bac8d8861223e6437d49168a07a87f30c1ed8eeb8f89e048ab2d1f77a4e4a65a. Jun 20 19:14:54.896947 containerd[1732]: time="2025-06-20T19:14:54.896904137Z" level=info msg="StartContainer for \"bac8d8861223e6437d49168a07a87f30c1ed8eeb8f89e048ab2d1f77a4e4a65a\" returns successfully" Jun 20 19:14:54.900999 systemd[1]: cri-containerd-bac8d8861223e6437d49168a07a87f30c1ed8eeb8f89e048ab2d1f77a4e4a65a.scope: Deactivated successfully. 
Jun 20 19:14:54.904268 containerd[1732]: time="2025-06-20T19:14:54.904003839Z" level=info msg="received exit event container_id:\"bac8d8861223e6437d49168a07a87f30c1ed8eeb8f89e048ab2d1f77a4e4a65a\" id:\"bac8d8861223e6437d49168a07a87f30c1ed8eeb8f89e048ab2d1f77a4e4a65a\" pid:3822 exited_at:{seconds:1750446894 nanos:903627321}" Jun 20 19:14:54.904594 containerd[1732]: time="2025-06-20T19:14:54.904567301Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bac8d8861223e6437d49168a07a87f30c1ed8eeb8f89e048ab2d1f77a4e4a65a\" id:\"bac8d8861223e6437d49168a07a87f30c1ed8eeb8f89e048ab2d1f77a4e4a65a\" pid:3822 exited_at:{seconds:1750446894 nanos:903627321}" Jun 20 19:14:54.924688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bac8d8861223e6437d49168a07a87f30c1ed8eeb8f89e048ab2d1f77a4e4a65a-rootfs.mount: Deactivated successfully. Jun 20 19:14:55.015794 kubelet[3133]: E0620 19:14:55.015644 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcgqf" podUID="7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0" Jun 20 19:14:55.102042 kubelet[3133]: I0620 19:14:55.101819 3133 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:14:55.117595 kubelet[3133]: I0620 19:14:55.117534 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f9544856c-hmskt" podStartSLOduration=2.578551753 podStartE2EDuration="5.117514077s" podCreationTimestamp="2025-06-20 19:14:50 +0000 UTC" firstStartedPulling="2025-06-20 19:14:50.788019687 +0000 UTC m=+17.912159500" lastFinishedPulling="2025-06-20 19:14:53.326982017 +0000 UTC m=+20.451121824" observedRunningTime="2025-06-20 19:14:54.111865068 +0000 UTC m=+21.236004884" watchObservedRunningTime="2025-06-20 19:14:55.117514077 +0000 UTC 
m=+22.241653965" Jun 20 19:14:57.016721 kubelet[3133]: E0620 19:14:57.016067 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcgqf" podUID="7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0" Jun 20 19:14:57.108533 containerd[1732]: time="2025-06-20T19:14:57.107741680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 20 19:14:59.015935 kubelet[3133]: E0620 19:14:59.015473 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcgqf" podUID="7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0" Jun 20 19:15:01.015715 kubelet[3133]: E0620 19:15:01.015465 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcgqf" podUID="7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0" Jun 20 19:15:03.015931 kubelet[3133]: E0620 19:15:03.015863 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcgqf" podUID="7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0" Jun 20 19:15:04.328995 containerd[1732]: time="2025-06-20T19:15:04.328946144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:04.374518 containerd[1732]: time="2025-06-20T19:15:04.374461960Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=70405879" Jun 20 19:15:04.420640 containerd[1732]: time="2025-06-20T19:15:04.420560382Z" level=info msg="ImageCreate event name:\"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:04.424979 containerd[1732]: time="2025-06-20T19:15:04.424932023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:04.425668 containerd[1732]: time="2025-06-20T19:15:04.425362442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"71898582\" in 7.31757571s" Jun 20 19:15:04.425668 containerd[1732]: time="2025-06-20T19:15:04.425392112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\"" Jun 20 19:15:04.427328 containerd[1732]: time="2025-06-20T19:15:04.427292033Z" level=info msg="CreateContainer within sandbox \"908304e01d69c1f52fd66c35463f5a5b8c0934d8ebdaa077fc8878ab543e4d09\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 20 19:15:04.633956 containerd[1732]: time="2025-06-20T19:15:04.633844541Z" level=info msg="Container 420d3b7b89f78f2a95f961cb274f759b80c295c4510cc3fab7fa29a5ea79625e: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:04.770684 containerd[1732]: time="2025-06-20T19:15:04.770635584Z" level=info msg="CreateContainer within sandbox \"908304e01d69c1f52fd66c35463f5a5b8c0934d8ebdaa077fc8878ab543e4d09\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"420d3b7b89f78f2a95f961cb274f759b80c295c4510cc3fab7fa29a5ea79625e\"" Jun 20 19:15:04.772542 containerd[1732]: time="2025-06-20T19:15:04.771276461Z" level=info msg="StartContainer for \"420d3b7b89f78f2a95f961cb274f759b80c295c4510cc3fab7fa29a5ea79625e\"" Jun 20 19:15:04.773201 containerd[1732]: time="2025-06-20T19:15:04.773168011Z" level=info msg="connecting to shim 420d3b7b89f78f2a95f961cb274f759b80c295c4510cc3fab7fa29a5ea79625e" address="unix:///run/containerd/s/f207414efa28409e68d914f1a158d382bb1b55196c3ba5adf51ed133eef5a0a4" protocol=ttrpc version=3 Jun 20 19:15:04.790890 systemd[1]: Started cri-containerd-420d3b7b89f78f2a95f961cb274f759b80c295c4510cc3fab7fa29a5ea79625e.scope - libcontainer container 420d3b7b89f78f2a95f961cb274f759b80c295c4510cc3fab7fa29a5ea79625e. Jun 20 19:15:04.830154 containerd[1732]: time="2025-06-20T19:15:04.830093146Z" level=info msg="StartContainer for \"420d3b7b89f78f2a95f961cb274f759b80c295c4510cc3fab7fa29a5ea79625e\" returns successfully" Jun 20 19:15:05.016823 kubelet[3133]: E0620 19:15:05.016003 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcgqf" podUID="7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0" Jun 20 19:15:07.016713 kubelet[3133]: E0620 19:15:07.015615 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcgqf" podUID="7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0" Jun 20 19:15:08.087016 kubelet[3133]: I0620 19:15:08.086588 3133 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:15:09.016249 kubelet[3133]: E0620 
19:15:09.015820 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcgqf" podUID="7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0" Jun 20 19:15:11.016400 kubelet[3133]: E0620 19:15:11.016028 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcgqf" podUID="7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0" Jun 20 19:15:12.007273 containerd[1732]: time="2025-06-20T19:15:12.006826091Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:15:12.008527 systemd[1]: cri-containerd-420d3b7b89f78f2a95f961cb274f759b80c295c4510cc3fab7fa29a5ea79625e.scope: Deactivated successfully. Jun 20 19:15:12.009140 systemd[1]: cri-containerd-420d3b7b89f78f2a95f961cb274f759b80c295c4510cc3fab7fa29a5ea79625e.scope: Consumed 430ms CPU time, 191M memory peak, 171.2M written to disk. 
Jun 20 19:15:12.011185 containerd[1732]: time="2025-06-20T19:15:12.011077753Z" level=info msg="received exit event container_id:\"420d3b7b89f78f2a95f961cb274f759b80c295c4510cc3fab7fa29a5ea79625e\" id:\"420d3b7b89f78f2a95f961cb274f759b80c295c4510cc3fab7fa29a5ea79625e\" pid:3885 exited_at:{seconds:1750446912 nanos:10859760}" Jun 20 19:15:12.011284 containerd[1732]: time="2025-06-20T19:15:12.011222139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"420d3b7b89f78f2a95f961cb274f759b80c295c4510cc3fab7fa29a5ea79625e\" id:\"420d3b7b89f78f2a95f961cb274f759b80c295c4510cc3fab7fa29a5ea79625e\" pid:3885 exited_at:{seconds:1750446912 nanos:10859760}" Jun 20 19:15:12.030417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-420d3b7b89f78f2a95f961cb274f759b80c295c4510cc3fab7fa29a5ea79625e-rootfs.mount: Deactivated successfully. Jun 20 19:15:12.034032 kubelet[3133]: I0620 19:15:12.032381 3133 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:15:12.075473 systemd[1]: Created slice kubepods-burstable-podf6f972ec_3558_420c_8e2b_8fd07b233bae.slice - libcontainer container kubepods-burstable-podf6f972ec_3558_420c_8e2b_8fd07b233bae.slice. Jun 20 19:15:12.087415 systemd[1]: Created slice kubepods-besteffort-pod5b050fb4_bc24_49ba_b12d_7dd8dadfe712.slice - libcontainer container kubepods-besteffort-pod5b050fb4_bc24_49ba_b12d_7dd8dadfe712.slice. Jun 20 19:15:12.096838 systemd[1]: Created slice kubepods-besteffort-pod1ea98c83_5bbf_4665_a250_04d974d93140.slice - libcontainer container kubepods-besteffort-pod1ea98c83_5bbf_4665_a250_04d974d93140.slice. Jun 20 19:15:12.108087 systemd[1]: Created slice kubepods-besteffort-pod643a3c1d_c11b_429d_af2c_62cba85afc5a.slice - libcontainer container kubepods-besteffort-pod643a3c1d_c11b_429d_af2c_62cba85afc5a.slice. 
Jun 20 19:15:12.116654 systemd[1]: Created slice kubepods-burstable-podf7a4c679_0707_404c_9de0_33337a161752.slice - libcontainer container kubepods-burstable-podf7a4c679_0707_404c_9de0_33337a161752.slice. Jun 20 19:15:12.122846 systemd[1]: Created slice kubepods-besteffort-podf70ef370_41e2_4b31_b596_90be9e228851.slice - libcontainer container kubepods-besteffort-podf70ef370_41e2_4b31_b596_90be9e228851.slice. Jun 20 19:15:12.131497 systemd[1]: Created slice kubepods-besteffort-pod529edae3_7d6f_495a_99af_8dcab5ab6f83.slice - libcontainer container kubepods-besteffort-pod529edae3_7d6f_495a_99af_8dcab5ab6f83.slice. Jun 20 19:15:12.165894 kubelet[3133]: I0620 19:15:12.165855 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/643a3c1d-c11b-429d-af2c-62cba85afc5a-calico-apiserver-certs\") pod \"calico-apiserver-56bc98767b-z6f9b\" (UID: \"643a3c1d-c11b-429d-af2c-62cba85afc5a\") " pod="calico-apiserver/calico-apiserver-56bc98767b-z6f9b" Jun 20 19:15:12.166341 kubelet[3133]: I0620 19:15:12.165938 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b050fb4-bc24-49ba-b12d-7dd8dadfe712-whisker-backend-key-pair\") pod \"whisker-7f7cf4d9bf-czdrn\" (UID: \"5b050fb4-bc24-49ba-b12d-7dd8dadfe712\") " pod="calico-system/whisker-7f7cf4d9bf-czdrn" Jun 20 19:15:12.166434 kubelet[3133]: I0620 19:15:12.166372 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8bcq\" (UniqueName: \"kubernetes.io/projected/529edae3-7d6f-495a-99af-8dcab5ab6f83-kube-api-access-h8bcq\") pod \"calico-kube-controllers-6595c785f9-rstkf\" (UID: \"529edae3-7d6f-495a-99af-8dcab5ab6f83\") " pod="calico-system/calico-kube-controllers-6595c785f9-rstkf" Jun 20 19:15:12.166434 kubelet[3133]: I0620 19:15:12.166395 3133 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f70ef370-41e2-4b31-b596-90be9e228851-goldmane-ca-bundle\") pod \"goldmane-5bd85449d4-t5qzq\" (UID: \"f70ef370-41e2-4b31-b596-90be9e228851\") " pod="calico-system/goldmane-5bd85449d4-t5qzq" Jun 20 19:15:12.166434 kubelet[3133]: I0620 19:15:12.166417 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b050fb4-bc24-49ba-b12d-7dd8dadfe712-whisker-ca-bundle\") pod \"whisker-7f7cf4d9bf-czdrn\" (UID: \"5b050fb4-bc24-49ba-b12d-7dd8dadfe712\") " pod="calico-system/whisker-7f7cf4d9bf-czdrn" Jun 20 19:15:12.166521 kubelet[3133]: I0620 19:15:12.166434 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cxpc\" (UniqueName: \"kubernetes.io/projected/5b050fb4-bc24-49ba-b12d-7dd8dadfe712-kube-api-access-4cxpc\") pod \"whisker-7f7cf4d9bf-czdrn\" (UID: \"5b050fb4-bc24-49ba-b12d-7dd8dadfe712\") " pod="calico-system/whisker-7f7cf4d9bf-czdrn" Jun 20 19:15:12.166521 kubelet[3133]: I0620 19:15:12.166453 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6f972ec-3558-420c-8e2b-8fd07b233bae-config-volume\") pod \"coredns-668d6bf9bc-gthft\" (UID: \"f6f972ec-3558-420c-8e2b-8fd07b233bae\") " pod="kube-system/coredns-668d6bf9bc-gthft" Jun 20 19:15:12.166521 kubelet[3133]: I0620 19:15:12.166476 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/529edae3-7d6f-495a-99af-8dcab5ab6f83-tigera-ca-bundle\") pod \"calico-kube-controllers-6595c785f9-rstkf\" (UID: \"529edae3-7d6f-495a-99af-8dcab5ab6f83\") " 
pod="calico-system/calico-kube-controllers-6595c785f9-rstkf" Jun 20 19:15:12.166521 kubelet[3133]: I0620 19:15:12.166493 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n67jq\" (UniqueName: \"kubernetes.io/projected/f6f972ec-3558-420c-8e2b-8fd07b233bae-kube-api-access-n67jq\") pod \"coredns-668d6bf9bc-gthft\" (UID: \"f6f972ec-3558-420c-8e2b-8fd07b233bae\") " pod="kube-system/coredns-668d6bf9bc-gthft" Jun 20 19:15:12.166521 kubelet[3133]: I0620 19:15:12.166511 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1ea98c83-5bbf-4665-a250-04d974d93140-calico-apiserver-certs\") pod \"calico-apiserver-56bc98767b-dgvxb\" (UID: \"1ea98c83-5bbf-4665-a250-04d974d93140\") " pod="calico-apiserver/calico-apiserver-56bc98767b-dgvxb" Jun 20 19:15:12.166641 kubelet[3133]: I0620 19:15:12.166536 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lmsj\" (UniqueName: \"kubernetes.io/projected/f7a4c679-0707-404c-9de0-33337a161752-kube-api-access-9lmsj\") pod \"coredns-668d6bf9bc-mcth6\" (UID: \"f7a4c679-0707-404c-9de0-33337a161752\") " pod="kube-system/coredns-668d6bf9bc-mcth6" Jun 20 19:15:12.166641 kubelet[3133]: I0620 19:15:12.166560 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f70ef370-41e2-4b31-b596-90be9e228851-goldmane-key-pair\") pod \"goldmane-5bd85449d4-t5qzq\" (UID: \"f70ef370-41e2-4b31-b596-90be9e228851\") " pod="calico-system/goldmane-5bd85449d4-t5qzq" Jun 20 19:15:12.166641 kubelet[3133]: I0620 19:15:12.166579 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v4gn\" (UniqueName: 
\"kubernetes.io/projected/1ea98c83-5bbf-4665-a250-04d974d93140-kube-api-access-8v4gn\") pod \"calico-apiserver-56bc98767b-dgvxb\" (UID: \"1ea98c83-5bbf-4665-a250-04d974d93140\") " pod="calico-apiserver/calico-apiserver-56bc98767b-dgvxb" Jun 20 19:15:12.166641 kubelet[3133]: I0620 19:15:12.166600 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7a4c679-0707-404c-9de0-33337a161752-config-volume\") pod \"coredns-668d6bf9bc-mcth6\" (UID: \"f7a4c679-0707-404c-9de0-33337a161752\") " pod="kube-system/coredns-668d6bf9bc-mcth6" Jun 20 19:15:12.166641 kubelet[3133]: I0620 19:15:12.166625 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtmbj\" (UniqueName: \"kubernetes.io/projected/643a3c1d-c11b-429d-af2c-62cba85afc5a-kube-api-access-wtmbj\") pod \"calico-apiserver-56bc98767b-z6f9b\" (UID: \"643a3c1d-c11b-429d-af2c-62cba85afc5a\") " pod="calico-apiserver/calico-apiserver-56bc98767b-z6f9b" Jun 20 19:15:12.166786 kubelet[3133]: I0620 19:15:12.166651 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f70ef370-41e2-4b31-b596-90be9e228851-config\") pod \"goldmane-5bd85449d4-t5qzq\" (UID: \"f70ef370-41e2-4b31-b596-90be9e228851\") " pod="calico-system/goldmane-5bd85449d4-t5qzq" Jun 20 19:15:12.166786 kubelet[3133]: I0620 19:15:12.166669 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87c97\" (UniqueName: \"kubernetes.io/projected/f70ef370-41e2-4b31-b596-90be9e228851-kube-api-access-87c97\") pod \"goldmane-5bd85449d4-t5qzq\" (UID: \"f70ef370-41e2-4b31-b596-90be9e228851\") " pod="calico-system/goldmane-5bd85449d4-t5qzq" Jun 20 19:15:12.383428 containerd[1732]: time="2025-06-20T19:15:12.383382264Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-gthft,Uid:f6f972ec-3558-420c-8e2b-8fd07b233bae,Namespace:kube-system,Attempt:0,}" Jun 20 19:15:12.394012 containerd[1732]: time="2025-06-20T19:15:12.393973365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f7cf4d9bf-czdrn,Uid:5b050fb4-bc24-49ba-b12d-7dd8dadfe712,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:12.400845 containerd[1732]: time="2025-06-20T19:15:12.400804104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bc98767b-dgvxb,Uid:1ea98c83-5bbf-4665-a250-04d974d93140,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:15:12.413374 containerd[1732]: time="2025-06-20T19:15:12.413336206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bc98767b-z6f9b,Uid:643a3c1d-c11b-429d-af2c-62cba85afc5a,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:15:12.419923 containerd[1732]: time="2025-06-20T19:15:12.419892781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mcth6,Uid:f7a4c679-0707-404c-9de0-33337a161752,Namespace:kube-system,Attempt:0,}" Jun 20 19:15:12.427459 containerd[1732]: time="2025-06-20T19:15:12.427426512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-t5qzq,Uid:f70ef370-41e2-4b31-b596-90be9e228851,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:12.435014 containerd[1732]: time="2025-06-20T19:15:12.434959724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6595c785f9-rstkf,Uid:529edae3-7d6f-495a-99af-8dcab5ab6f83,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:13.021363 systemd[1]: Created slice kubepods-besteffort-pod7afa5fe6_9ec0_45d7_b74a_d76e477ad3c0.slice - libcontainer container kubepods-besteffort-pod7afa5fe6_9ec0_45d7_b74a_d76e477ad3c0.slice. 
Jun 20 19:15:13.023656 containerd[1732]: time="2025-06-20T19:15:13.023607009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcgqf,Uid:7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:14.474199 containerd[1732]: time="2025-06-20T19:15:14.474147520Z" level=error msg="Failed to destroy network for sandbox \"c21584608a91c5177b858e7d81c4f64080e7e344185e3cb22a6814ab44a9f97d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:14.476871 systemd[1]: run-netns-cni\x2d8d24dab3\x2dee60\x2d419d\x2d4438\x2df0c27d91fdc0.mount: Deactivated successfully. Jun 20 19:15:14.926464 containerd[1732]: time="2025-06-20T19:15:14.926324017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f7cf4d9bf-czdrn,Uid:5b050fb4-bc24-49ba-b12d-7dd8dadfe712,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c21584608a91c5177b858e7d81c4f64080e7e344185e3cb22a6814ab44a9f97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:14.926878 kubelet[3133]: E0620 19:15:14.926819 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c21584608a91c5177b858e7d81c4f64080e7e344185e3cb22a6814ab44a9f97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:14.927515 kubelet[3133]: E0620 19:15:14.927279 3133 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c21584608a91c5177b858e7d81c4f64080e7e344185e3cb22a6814ab44a9f97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f7cf4d9bf-czdrn" Jun 20 19:15:14.927515 kubelet[3133]: E0620 19:15:14.927322 3133 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c21584608a91c5177b858e7d81c4f64080e7e344185e3cb22a6814ab44a9f97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f7cf4d9bf-czdrn" Jun 20 19:15:14.927515 kubelet[3133]: E0620 19:15:14.927391 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f7cf4d9bf-czdrn_calico-system(5b050fb4-bc24-49ba-b12d-7dd8dadfe712)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f7cf4d9bf-czdrn_calico-system(5b050fb4-bc24-49ba-b12d-7dd8dadfe712)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c21584608a91c5177b858e7d81c4f64080e7e344185e3cb22a6814ab44a9f97d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f7cf4d9bf-czdrn" podUID="5b050fb4-bc24-49ba-b12d-7dd8dadfe712" Jun 20 19:15:15.062408 containerd[1732]: time="2025-06-20T19:15:15.062277773Z" level=error msg="Failed to destroy network for sandbox \"33aee71abacb23b10f850b9453365d7f3c7f4458cfe0e39fbe8329b0e092911f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 
19:15:15.066965 containerd[1732]: time="2025-06-20T19:15:15.066768702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gthft,Uid:f6f972ec-3558-420c-8e2b-8fd07b233bae,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"33aee71abacb23b10f850b9453365d7f3c7f4458cfe0e39fbe8329b0e092911f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.069722 kubelet[3133]: E0620 19:15:15.069301 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33aee71abacb23b10f850b9453365d7f3c7f4458cfe0e39fbe8329b0e092911f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.069722 kubelet[3133]: E0620 19:15:15.069370 3133 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33aee71abacb23b10f850b9453365d7f3c7f4458cfe0e39fbe8329b0e092911f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gthft" Jun 20 19:15:15.069722 kubelet[3133]: E0620 19:15:15.069394 3133 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33aee71abacb23b10f850b9453365d7f3c7f4458cfe0e39fbe8329b0e092911f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gthft" Jun 
20 19:15:15.069900 kubelet[3133]: E0620 19:15:15.069440 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gthft_kube-system(f6f972ec-3558-420c-8e2b-8fd07b233bae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gthft_kube-system(f6f972ec-3558-420c-8e2b-8fd07b233bae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33aee71abacb23b10f850b9453365d7f3c7f4458cfe0e39fbe8329b0e092911f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gthft" podUID="f6f972ec-3558-420c-8e2b-8fd07b233bae" Jun 20 19:15:15.123059 containerd[1732]: time="2025-06-20T19:15:15.123006951Z" level=error msg="Failed to destroy network for sandbox \"30957e9c3f1573c09097f66b8aabc528fb15bda8dd9a4bd6b59e5e34681bbd43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.123420 containerd[1732]: time="2025-06-20T19:15:15.123268847Z" level=error msg="Failed to destroy network for sandbox \"9f27dd04764e568224678c9af1f1baaf858d2dc829216b8003a9a2badd6416e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.124714 containerd[1732]: time="2025-06-20T19:15:15.124604268Z" level=error msg="Failed to destroy network for sandbox \"bd86971446d5e8b92b2181eecde781aea6952a17e1e93df0057d81045cf31b84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.127610 
containerd[1732]: time="2025-06-20T19:15:15.127571975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mcth6,Uid:f7a4c679-0707-404c-9de0-33337a161752,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f27dd04764e568224678c9af1f1baaf858d2dc829216b8003a9a2badd6416e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.128124 kubelet[3133]: E0620 19:15:15.128083 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f27dd04764e568224678c9af1f1baaf858d2dc829216b8003a9a2badd6416e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.128201 kubelet[3133]: E0620 19:15:15.128153 3133 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f27dd04764e568224678c9af1f1baaf858d2dc829216b8003a9a2badd6416e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mcth6" Jun 20 19:15:15.128201 kubelet[3133]: E0620 19:15:15.128178 3133 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f27dd04764e568224678c9af1f1baaf858d2dc829216b8003a9a2badd6416e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mcth6" Jun 20 
19:15:15.128271 kubelet[3133]: E0620 19:15:15.128237 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mcth6_kube-system(f7a4c679-0707-404c-9de0-33337a161752)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mcth6_kube-system(f7a4c679-0707-404c-9de0-33337a161752)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f27dd04764e568224678c9af1f1baaf858d2dc829216b8003a9a2badd6416e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mcth6" podUID="f7a4c679-0707-404c-9de0-33337a161752" Jun 20 19:15:15.129440 containerd[1732]: time="2025-06-20T19:15:15.129335664Z" level=error msg="Failed to destroy network for sandbox \"0f6b72a736488027d03ec2f44b315cfcad7a95ae147f27ff447e5b485ada3009\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.130430 containerd[1732]: time="2025-06-20T19:15:15.130397167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-t5qzq,Uid:f70ef370-41e2-4b31-b596-90be9e228851,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30957e9c3f1573c09097f66b8aabc528fb15bda8dd9a4bd6b59e5e34681bbd43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.131169 kubelet[3133]: E0620 19:15:15.131125 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"30957e9c3f1573c09097f66b8aabc528fb15bda8dd9a4bd6b59e5e34681bbd43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.131349 kubelet[3133]: E0620 19:15:15.131186 3133 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30957e9c3f1573c09097f66b8aabc528fb15bda8dd9a4bd6b59e5e34681bbd43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-t5qzq" Jun 20 19:15:15.131349 kubelet[3133]: E0620 19:15:15.131207 3133 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30957e9c3f1573c09097f66b8aabc528fb15bda8dd9a4bd6b59e5e34681bbd43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-t5qzq" Jun 20 19:15:15.131349 kubelet[3133]: E0620 19:15:15.131249 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-t5qzq_calico-system(f70ef370-41e2-4b31-b596-90be9e228851)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-t5qzq_calico-system(f70ef370-41e2-4b31-b596-90be9e228851)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30957e9c3f1573c09097f66b8aabc528fb15bda8dd9a4bd6b59e5e34681bbd43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-t5qzq" 
podUID="f70ef370-41e2-4b31-b596-90be9e228851" Jun 20 19:15:15.135420 containerd[1732]: time="2025-06-20T19:15:15.135085166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bc98767b-z6f9b,Uid:643a3c1d-c11b-429d-af2c-62cba85afc5a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd86971446d5e8b92b2181eecde781aea6952a17e1e93df0057d81045cf31b84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.135536 kubelet[3133]: E0620 19:15:15.135273 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd86971446d5e8b92b2181eecde781aea6952a17e1e93df0057d81045cf31b84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.135536 kubelet[3133]: E0620 19:15:15.135317 3133 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd86971446d5e8b92b2181eecde781aea6952a17e1e93df0057d81045cf31b84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56bc98767b-z6f9b" Jun 20 19:15:15.135536 kubelet[3133]: E0620 19:15:15.135338 3133 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd86971446d5e8b92b2181eecde781aea6952a17e1e93df0057d81045cf31b84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56bc98767b-z6f9b" Jun 20 19:15:15.135639 kubelet[3133]: E0620 19:15:15.135373 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56bc98767b-z6f9b_calico-apiserver(643a3c1d-c11b-429d-af2c-62cba85afc5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56bc98767b-z6f9b_calico-apiserver(643a3c1d-c11b-429d-af2c-62cba85afc5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd86971446d5e8b92b2181eecde781aea6952a17e1e93df0057d81045cf31b84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56bc98767b-z6f9b" podUID="643a3c1d-c11b-429d-af2c-62cba85afc5a" Jun 20 19:15:15.137911 containerd[1732]: time="2025-06-20T19:15:15.137799821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bc98767b-dgvxb,Uid:1ea98c83-5bbf-4665-a250-04d974d93140,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f6b72a736488027d03ec2f44b315cfcad7a95ae147f27ff447e5b485ada3009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.138097 kubelet[3133]: E0620 19:15:15.137983 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f6b72a736488027d03ec2f44b315cfcad7a95ae147f27ff447e5b485ada3009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.138154 kubelet[3133]: 
E0620 19:15:15.138118 3133 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f6b72a736488027d03ec2f44b315cfcad7a95ae147f27ff447e5b485ada3009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56bc98767b-dgvxb" Jun 20 19:15:15.138154 kubelet[3133]: E0620 19:15:15.138138 3133 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f6b72a736488027d03ec2f44b315cfcad7a95ae147f27ff447e5b485ada3009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56bc98767b-dgvxb" Jun 20 19:15:15.138373 kubelet[3133]: E0620 19:15:15.138278 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56bc98767b-dgvxb_calico-apiserver(1ea98c83-5bbf-4665-a250-04d974d93140)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56bc98767b-dgvxb_calico-apiserver(1ea98c83-5bbf-4665-a250-04d974d93140)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f6b72a736488027d03ec2f44b315cfcad7a95ae147f27ff447e5b485ada3009\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56bc98767b-dgvxb" podUID="1ea98c83-5bbf-4665-a250-04d974d93140" Jun 20 19:15:15.147895 containerd[1732]: time="2025-06-20T19:15:15.147850105Z" level=error msg="Failed to destroy network for sandbox 
\"5be83c1cf55f71fc3a8044ee3142a00ddd97dd2be7b9cff0c61aebb4e2964714\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.151191 containerd[1732]: time="2025-06-20T19:15:15.150925749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 20 19:15:15.152611 containerd[1732]: time="2025-06-20T19:15:15.151465986Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcgqf,Uid:7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5be83c1cf55f71fc3a8044ee3142a00ddd97dd2be7b9cff0c61aebb4e2964714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.152971 kubelet[3133]: E0620 19:15:15.152941 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5be83c1cf55f71fc3a8044ee3142a00ddd97dd2be7b9cff0c61aebb4e2964714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.153713 kubelet[3133]: E0620 19:15:15.153671 3133 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5be83c1cf55f71fc3a8044ee3142a00ddd97dd2be7b9cff0c61aebb4e2964714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcgqf" Jun 20 19:15:15.153902 kubelet[3133]: E0620 19:15:15.153837 3133 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5be83c1cf55f71fc3a8044ee3142a00ddd97dd2be7b9cff0c61aebb4e2964714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcgqf" Jun 20 19:15:15.154644 kubelet[3133]: E0620 19:15:15.154594 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tcgqf_calico-system(7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tcgqf_calico-system(7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5be83c1cf55f71fc3a8044ee3142a00ddd97dd2be7b9cff0c61aebb4e2964714\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tcgqf" podUID="7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0" Jun 20 19:15:15.155907 containerd[1732]: time="2025-06-20T19:15:15.155865258Z" level=error msg="Failed to destroy network for sandbox \"f0abb7a7fd47773bcf6495d6ea448d7def164748ca8b1a0119130d919ba1c580\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.159273 containerd[1732]: time="2025-06-20T19:15:15.159237635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6595c785f9-rstkf,Uid:529edae3-7d6f-495a-99af-8dcab5ab6f83,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f0abb7a7fd47773bcf6495d6ea448d7def164748ca8b1a0119130d919ba1c580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.159446 kubelet[3133]: E0620 19:15:15.159423 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0abb7a7fd47773bcf6495d6ea448d7def164748ca8b1a0119130d919ba1c580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:15.159491 kubelet[3133]: E0620 19:15:15.159463 3133 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0abb7a7fd47773bcf6495d6ea448d7def164748ca8b1a0119130d919ba1c580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6595c785f9-rstkf" Jun 20 19:15:15.159523 kubelet[3133]: E0620 19:15:15.159484 3133 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0abb7a7fd47773bcf6495d6ea448d7def164748ca8b1a0119130d919ba1c580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6595c785f9-rstkf" Jun 20 19:15:15.159549 kubelet[3133]: E0620 19:15:15.159521 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6595c785f9-rstkf_calico-system(529edae3-7d6f-495a-99af-8dcab5ab6f83)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-6595c785f9-rstkf_calico-system(529edae3-7d6f-495a-99af-8dcab5ab6f83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0abb7a7fd47773bcf6495d6ea448d7def164748ca8b1a0119130d919ba1c580\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6595c785f9-rstkf" podUID="529edae3-7d6f-495a-99af-8dcab5ab6f83" Jun 20 19:15:15.245666 systemd[1]: run-netns-cni\x2d0e0963d1\x2dd74d\x2de22b\x2d0a5c\x2d60a083e4607d.mount: Deactivated successfully. Jun 20 19:15:15.246081 systemd[1]: run-netns-cni\x2d206676a9\x2d12a2\x2d1263\x2dd52f\x2d2edea7c4add3.mount: Deactivated successfully. Jun 20 19:15:15.246146 systemd[1]: run-netns-cni\x2d5d9ba050\x2dd8a4\x2d1702\x2df2f3\x2d736e0f53a77a.mount: Deactivated successfully. Jun 20 19:15:15.246197 systemd[1]: run-netns-cni\x2d582a2be4\x2d5aee\x2def0f\x2da56f\x2d05dd042a66d3.mount: Deactivated successfully. Jun 20 19:15:19.471651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1214387134.mount: Deactivated successfully. 
Jun 20 19:15:19.497147 containerd[1732]: time="2025-06-20T19:15:19.497087249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:19.499718 containerd[1732]: time="2025-06-20T19:15:19.499677375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=156518913" Jun 20 19:15:19.505957 containerd[1732]: time="2025-06-20T19:15:19.505876542Z" level=info msg="ImageCreate event name:\"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:19.509810 containerd[1732]: time="2025-06-20T19:15:19.509760778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:19.510132 containerd[1732]: time="2025-06-20T19:15:19.510074442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"156518775\" in 4.358562944s" Jun 20 19:15:19.510132 containerd[1732]: time="2025-06-20T19:15:19.510108004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\"" Jun 20 19:15:19.523546 containerd[1732]: time="2025-06-20T19:15:19.523510652Z" level=info msg="CreateContainer within sandbox \"908304e01d69c1f52fd66c35463f5a5b8c0934d8ebdaa077fc8878ab543e4d09\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 20 19:15:19.549661 containerd[1732]: time="2025-06-20T19:15:19.546175871Z" level=info msg="Container 
bbd6ffdf315ecc94ffff356a0813588b735f49e5db96528afce24fd34fb10381: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:19.573634 containerd[1732]: time="2025-06-20T19:15:19.573145464Z" level=info msg="CreateContainer within sandbox \"908304e01d69c1f52fd66c35463f5a5b8c0934d8ebdaa077fc8878ab543e4d09\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bbd6ffdf315ecc94ffff356a0813588b735f49e5db96528afce24fd34fb10381\"" Jun 20 19:15:19.574974 containerd[1732]: time="2025-06-20T19:15:19.574938638Z" level=info msg="StartContainer for \"bbd6ffdf315ecc94ffff356a0813588b735f49e5db96528afce24fd34fb10381\"" Jun 20 19:15:19.579078 containerd[1732]: time="2025-06-20T19:15:19.578957722Z" level=info msg="connecting to shim bbd6ffdf315ecc94ffff356a0813588b735f49e5db96528afce24fd34fb10381" address="unix:///run/containerd/s/f207414efa28409e68d914f1a158d382bb1b55196c3ba5adf51ed133eef5a0a4" protocol=ttrpc version=3 Jun 20 19:15:19.601853 systemd[1]: Started cri-containerd-bbd6ffdf315ecc94ffff356a0813588b735f49e5db96528afce24fd34fb10381.scope - libcontainer container bbd6ffdf315ecc94ffff356a0813588b735f49e5db96528afce24fd34fb10381. Jun 20 19:15:19.636821 containerd[1732]: time="2025-06-20T19:15:19.636750407Z" level=info msg="StartContainer for \"bbd6ffdf315ecc94ffff356a0813588b735f49e5db96528afce24fd34fb10381\" returns successfully" Jun 20 19:15:19.888136 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 20 19:15:19.888283 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 20 19:15:20.017490 kubelet[3133]: I0620 19:15:20.016958 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b050fb4-bc24-49ba-b12d-7dd8dadfe712-whisker-ca-bundle\") pod \"5b050fb4-bc24-49ba-b12d-7dd8dadfe712\" (UID: \"5b050fb4-bc24-49ba-b12d-7dd8dadfe712\") " Jun 20 19:15:20.017490 kubelet[3133]: I0620 19:15:20.017014 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cxpc\" (UniqueName: \"kubernetes.io/projected/5b050fb4-bc24-49ba-b12d-7dd8dadfe712-kube-api-access-4cxpc\") pod \"5b050fb4-bc24-49ba-b12d-7dd8dadfe712\" (UID: \"5b050fb4-bc24-49ba-b12d-7dd8dadfe712\") " Jun 20 19:15:20.017490 kubelet[3133]: I0620 19:15:20.017055 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b050fb4-bc24-49ba-b12d-7dd8dadfe712-whisker-backend-key-pair\") pod \"5b050fb4-bc24-49ba-b12d-7dd8dadfe712\" (UID: \"5b050fb4-bc24-49ba-b12d-7dd8dadfe712\") " Jun 20 19:15:20.020008 kubelet[3133]: I0620 19:15:20.019966 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b050fb4-bc24-49ba-b12d-7dd8dadfe712-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5b050fb4-bc24-49ba-b12d-7dd8dadfe712" (UID: "5b050fb4-bc24-49ba-b12d-7dd8dadfe712"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:15:20.022542 kubelet[3133]: I0620 19:15:20.022491 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b050fb4-bc24-49ba-b12d-7dd8dadfe712-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5b050fb4-bc24-49ba-b12d-7dd8dadfe712" (UID: "5b050fb4-bc24-49ba-b12d-7dd8dadfe712"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:15:20.025418 kubelet[3133]: I0620 19:15:20.025372 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b050fb4-bc24-49ba-b12d-7dd8dadfe712-kube-api-access-4cxpc" (OuterVolumeSpecName: "kube-api-access-4cxpc") pod "5b050fb4-bc24-49ba-b12d-7dd8dadfe712" (UID: "5b050fb4-bc24-49ba-b12d-7dd8dadfe712"). InnerVolumeSpecName "kube-api-access-4cxpc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:15:20.117677 kubelet[3133]: I0620 19:15:20.117635 3133 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b050fb4-bc24-49ba-b12d-7dd8dadfe712-whisker-ca-bundle\") on node \"ci-4344.1.0-a-69d2cbc98d\" DevicePath \"\"" Jun 20 19:15:20.117677 kubelet[3133]: I0620 19:15:20.117675 3133 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4cxpc\" (UniqueName: \"kubernetes.io/projected/5b050fb4-bc24-49ba-b12d-7dd8dadfe712-kube-api-access-4cxpc\") on node \"ci-4344.1.0-a-69d2cbc98d\" DevicePath \"\"" Jun 20 19:15:20.117677 kubelet[3133]: I0620 19:15:20.117710 3133 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b050fb4-bc24-49ba-b12d-7dd8dadfe712-whisker-backend-key-pair\") on node \"ci-4344.1.0-a-69d2cbc98d\" DevicePath \"\"" Jun 20 19:15:20.167367 systemd[1]: Removed slice kubepods-besteffort-pod5b050fb4_bc24_49ba_b12d_7dd8dadfe712.slice - libcontainer container kubepods-besteffort-pod5b050fb4_bc24_49ba_b12d_7dd8dadfe712.slice. 
Jun 20 19:15:20.203637 kubelet[3133]: I0620 19:15:20.203287 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-255jq" podStartSLOduration=3.284794273 podStartE2EDuration="30.20326744s" podCreationTimestamp="2025-06-20 19:14:50 +0000 UTC" firstStartedPulling="2025-06-20 19:14:52.59244177 +0000 UTC m=+19.716581583" lastFinishedPulling="2025-06-20 19:15:19.510914919 +0000 UTC m=+46.635054750" observedRunningTime="2025-06-20 19:15:20.188446367 +0000 UTC m=+47.312586184" watchObservedRunningTime="2025-06-20 19:15:20.20326744 +0000 UTC m=+47.327407256" Jun 20 19:15:20.269887 systemd[1]: Created slice kubepods-besteffort-pod6d3b8cf5_01f3_4940_86eb_d2c3235ac970.slice - libcontainer container kubepods-besteffort-pod6d3b8cf5_01f3_4940_86eb_d2c3235ac970.slice. Jun 20 19:15:20.276798 containerd[1732]: time="2025-06-20T19:15:20.273056748Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbd6ffdf315ecc94ffff356a0813588b735f49e5db96528afce24fd34fb10381\" id:\"7e27814a5811cb1a378195185335a18619333e06f373fa12bcb1f1252be4bccf\" pid:4222 exit_status:1 exited_at:{seconds:1750446920 nanos:270786261}" Jun 20 19:15:20.319064 kubelet[3133]: I0620 19:15:20.318987 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkjkj\" (UniqueName: \"kubernetes.io/projected/6d3b8cf5-01f3-4940-86eb-d2c3235ac970-kube-api-access-qkjkj\") pod \"whisker-d44fd6dcd-m855x\" (UID: \"6d3b8cf5-01f3-4940-86eb-d2c3235ac970\") " pod="calico-system/whisker-d44fd6dcd-m855x" Jun 20 19:15:20.319064 kubelet[3133]: I0620 19:15:20.319029 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6d3b8cf5-01f3-4940-86eb-d2c3235ac970-whisker-backend-key-pair\") pod \"whisker-d44fd6dcd-m855x\" (UID: \"6d3b8cf5-01f3-4940-86eb-d2c3235ac970\") " pod="calico-system/whisker-d44fd6dcd-m855x" 
Jun 20 19:15:20.319064 kubelet[3133]: I0620 19:15:20.319051 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d3b8cf5-01f3-4940-86eb-d2c3235ac970-whisker-ca-bundle\") pod \"whisker-d44fd6dcd-m855x\" (UID: \"6d3b8cf5-01f3-4940-86eb-d2c3235ac970\") " pod="calico-system/whisker-d44fd6dcd-m855x" Jun 20 19:15:20.471670 systemd[1]: var-lib-kubelet-pods-5b050fb4\x2dbc24\x2d49ba\x2db12d\x2d7dd8dadfe712-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4cxpc.mount: Deactivated successfully. Jun 20 19:15:20.471789 systemd[1]: var-lib-kubelet-pods-5b050fb4\x2dbc24\x2d49ba\x2db12d\x2d7dd8dadfe712-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jun 20 19:15:20.577291 containerd[1732]: time="2025-06-20T19:15:20.577249361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d44fd6dcd-m855x,Uid:6d3b8cf5-01f3-4940-86eb-d2c3235ac970,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:20.673883 systemd-networkd[1359]: calid1afebf4526: Link UP Jun 20 19:15:20.674805 systemd-networkd[1359]: calid1afebf4526: Gained carrier Jun 20 19:15:20.692482 containerd[1732]: 2025-06-20 19:15:20.603 [INFO][4236] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:15:20.692482 containerd[1732]: 2025-06-20 19:15:20.612 [INFO][4236] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--69d2cbc98d-k8s-whisker--d44fd6dcd--m855x-eth0 whisker-d44fd6dcd- calico-system 6d3b8cf5-01f3-4940-86eb-d2c3235ac970 887 0 2025-06-20 19:15:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d44fd6dcd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.1.0-a-69d2cbc98d whisker-d44fd6dcd-m855x eth0 whisker [] [] 
[kns.calico-system ksa.calico-system.whisker] calid1afebf4526 [] [] }} ContainerID="a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" Namespace="calico-system" Pod="whisker-d44fd6dcd-m855x" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-whisker--d44fd6dcd--m855x-" Jun 20 19:15:20.692482 containerd[1732]: 2025-06-20 19:15:20.612 [INFO][4236] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" Namespace="calico-system" Pod="whisker-d44fd6dcd-m855x" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-whisker--d44fd6dcd--m855x-eth0" Jun 20 19:15:20.692482 containerd[1732]: 2025-06-20 19:15:20.635 [INFO][4248] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" HandleID="k8s-pod-network.a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-whisker--d44fd6dcd--m855x-eth0" Jun 20 19:15:20.692811 containerd[1732]: 2025-06-20 19:15:20.635 [INFO][4248] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" HandleID="k8s-pod-network.a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-whisker--d44fd6dcd--m855x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f250), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-69d2cbc98d", "pod":"whisker-d44fd6dcd-m855x", "timestamp":"2025-06-20 19:15:20.635516777 +0000 UTC"}, Hostname:"ci-4344.1.0-a-69d2cbc98d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:20.692811 containerd[1732]: 2025-06-20 19:15:20.635 [INFO][4248] ipam/ipam_plugin.go 353: About 
to acquire host-wide IPAM lock. Jun 20 19:15:20.692811 containerd[1732]: 2025-06-20 19:15:20.635 [INFO][4248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:15:20.692811 containerd[1732]: 2025-06-20 19:15:20.635 [INFO][4248] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-69d2cbc98d' Jun 20 19:15:20.692811 containerd[1732]: 2025-06-20 19:15:20.640 [INFO][4248] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:20.692811 containerd[1732]: 2025-06-20 19:15:20.644 [INFO][4248] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:20.692811 containerd[1732]: 2025-06-20 19:15:20.647 [INFO][4248] ipam/ipam.go 511: Trying affinity for 192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:20.692811 containerd[1732]: 2025-06-20 19:15:20.649 [INFO][4248] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:20.692811 containerd[1732]: 2025-06-20 19:15:20.650 [INFO][4248] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:20.693022 containerd[1732]: 2025-06-20 19:15:20.650 [INFO][4248] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:20.693022 containerd[1732]: 2025-06-20 19:15:20.651 [INFO][4248] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545 Jun 20 19:15:20.693022 containerd[1732]: 2025-06-20 19:15:20.659 [INFO][4248] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.192/26 
handle="k8s-pod-network.a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:20.693022 containerd[1732]: 2025-06-20 19:15:20.664 [INFO][4248] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.193/26] block=192.168.34.192/26 handle="k8s-pod-network.a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:20.693022 containerd[1732]: 2025-06-20 19:15:20.664 [INFO][4248] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.193/26] handle="k8s-pod-network.a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:20.693022 containerd[1732]: 2025-06-20 19:15:20.664 [INFO][4248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:15:20.693022 containerd[1732]: 2025-06-20 19:15:20.664 [INFO][4248] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.193/26] IPv6=[] ContainerID="a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" HandleID="k8s-pod-network.a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-whisker--d44fd6dcd--m855x-eth0" Jun 20 19:15:20.693174 containerd[1732]: 2025-06-20 19:15:20.667 [INFO][4236] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" Namespace="calico-system" Pod="whisker-d44fd6dcd-m855x" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-whisker--d44fd6dcd--m855x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-whisker--d44fd6dcd--m855x-eth0", GenerateName:"whisker-d44fd6dcd-", Namespace:"calico-system", SelfLink:"", UID:"6d3b8cf5-01f3-4940-86eb-d2c3235ac970", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 20, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d44fd6dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"", Pod:"whisker-d44fd6dcd-m855x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid1afebf4526", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:20.693174 containerd[1732]: 2025-06-20 19:15:20.667 [INFO][4236] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.193/32] ContainerID="a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" Namespace="calico-system" Pod="whisker-d44fd6dcd-m855x" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-whisker--d44fd6dcd--m855x-eth0" Jun 20 19:15:20.693389 containerd[1732]: 2025-06-20 19:15:20.667 [INFO][4236] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1afebf4526 ContainerID="a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" Namespace="calico-system" Pod="whisker-d44fd6dcd-m855x" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-whisker--d44fd6dcd--m855x-eth0" Jun 20 19:15:20.693389 containerd[1732]: 2025-06-20 19:15:20.675 [INFO][4236] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" Namespace="calico-system" 
Pod="whisker-d44fd6dcd-m855x" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-whisker--d44fd6dcd--m855x-eth0" Jun 20 19:15:20.693443 containerd[1732]: 2025-06-20 19:15:20.676 [INFO][4236] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" Namespace="calico-system" Pod="whisker-d44fd6dcd-m855x" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-whisker--d44fd6dcd--m855x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-whisker--d44fd6dcd--m855x-eth0", GenerateName:"whisker-d44fd6dcd-", Namespace:"calico-system", SelfLink:"", UID:"6d3b8cf5-01f3-4940-86eb-d2c3235ac970", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d44fd6dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545", Pod:"whisker-d44fd6dcd-m855x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid1afebf4526", MAC:"c6:d6:61:96:af:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 
19:15:20.693509 containerd[1732]: 2025-06-20 19:15:20.691 [INFO][4236] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" Namespace="calico-system" Pod="whisker-d44fd6dcd-m855x" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-whisker--d44fd6dcd--m855x-eth0" Jun 20 19:15:20.733151 containerd[1732]: time="2025-06-20T19:15:20.732404935Z" level=info msg="connecting to shim a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545" address="unix:///run/containerd/s/f6c156956437a4786c53e2ccadedd119d873dd07257d97f9621e33652acb8327" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:20.759844 systemd[1]: Started cri-containerd-a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545.scope - libcontainer container a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545. Jun 20 19:15:20.798824 containerd[1732]: time="2025-06-20T19:15:20.798781760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d44fd6dcd-m855x,Uid:6d3b8cf5-01f3-4940-86eb-d2c3235ac970,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545\"" Jun 20 19:15:20.800170 containerd[1732]: time="2025-06-20T19:15:20.800147171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 20 19:15:21.018661 kubelet[3133]: I0620 19:15:21.018473 3133 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b050fb4-bc24-49ba-b12d-7dd8dadfe712" path="/var/lib/kubelet/pods/5b050fb4-bc24-49ba-b12d-7dd8dadfe712/volumes" Jun 20 19:15:21.528442 containerd[1732]: time="2025-06-20T19:15:21.527960536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbd6ffdf315ecc94ffff356a0813588b735f49e5db96528afce24fd34fb10381\" id:\"59dace6bdc1012db52044d205f59fe874a13095ff254b422f4714c1d3ac7dc5a\" pid:4352 exit_status:1 exited_at:{seconds:1750446921 nanos:527605794}" Jun 20 19:15:21.790080 
systemd-networkd[1359]: vxlan.calico: Link UP Jun 20 19:15:21.790089 systemd-networkd[1359]: vxlan.calico: Gained carrier Jun 20 19:15:22.084946 systemd-networkd[1359]: calid1afebf4526: Gained IPv6LL Jun 20 19:15:22.141728 containerd[1732]: time="2025-06-20T19:15:22.141564723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:22.144030 containerd[1732]: time="2025-06-20T19:15:22.143970629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4661202" Jun 20 19:15:22.147373 containerd[1732]: time="2025-06-20T19:15:22.147298566Z" level=info msg="ImageCreate event name:\"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:22.151373 containerd[1732]: time="2025-06-20T19:15:22.151306980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:22.151805 containerd[1732]: time="2025-06-20T19:15:22.151779984Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"6153897\" in 1.351466681s" Jun 20 19:15:22.151853 containerd[1732]: time="2025-06-20T19:15:22.151814512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\"" Jun 20 19:15:22.154210 containerd[1732]: time="2025-06-20T19:15:22.154142240Z" level=info msg="CreateContainer within sandbox 
\"a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 20 19:15:22.175602 containerd[1732]: time="2025-06-20T19:15:22.174816676Z" level=info msg="Container dfa0f4ec23d08184ccec45c85af778391bde3aeff46b5c4d35ba0ea03458a1be: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:22.182979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4137373452.mount: Deactivated successfully. Jun 20 19:15:22.192901 containerd[1732]: time="2025-06-20T19:15:22.192500911Z" level=info msg="CreateContainer within sandbox \"a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"dfa0f4ec23d08184ccec45c85af778391bde3aeff46b5c4d35ba0ea03458a1be\"" Jun 20 19:15:22.194475 containerd[1732]: time="2025-06-20T19:15:22.194368811Z" level=info msg="StartContainer for \"dfa0f4ec23d08184ccec45c85af778391bde3aeff46b5c4d35ba0ea03458a1be\"" Jun 20 19:15:22.197437 containerd[1732]: time="2025-06-20T19:15:22.197386937Z" level=info msg="connecting to shim dfa0f4ec23d08184ccec45c85af778391bde3aeff46b5c4d35ba0ea03458a1be" address="unix:///run/containerd/s/f6c156956437a4786c53e2ccadedd119d873dd07257d97f9621e33652acb8327" protocol=ttrpc version=3 Jun 20 19:15:22.226044 systemd[1]: Started cri-containerd-dfa0f4ec23d08184ccec45c85af778391bde3aeff46b5c4d35ba0ea03458a1be.scope - libcontainer container dfa0f4ec23d08184ccec45c85af778391bde3aeff46b5c4d35ba0ea03458a1be. 
Jun 20 19:15:22.289096 containerd[1732]: time="2025-06-20T19:15:22.289045305Z" level=info msg="StartContainer for \"dfa0f4ec23d08184ccec45c85af778391bde3aeff46b5c4d35ba0ea03458a1be\" returns successfully" Jun 20 19:15:22.291340 containerd[1732]: time="2025-06-20T19:15:22.290866525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 20 19:15:23.172889 systemd-networkd[1359]: vxlan.calico: Gained IPv6LL Jun 20 19:15:24.726929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4140329280.mount: Deactivated successfully. Jun 20 19:15:24.778512 containerd[1732]: time="2025-06-20T19:15:24.778453707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:24.782650 containerd[1732]: time="2025-06-20T19:15:24.782618560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=33086345" Jun 20 19:15:24.785264 containerd[1732]: time="2025-06-20T19:15:24.785217010Z" level=info msg="ImageCreate event name:\"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:24.790077 containerd[1732]: time="2025-06-20T19:15:24.790029550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:24.790553 containerd[1732]: time="2025-06-20T19:15:24.790426186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"33086175\" 
in 2.499515534s" Jun 20 19:15:24.790553 containerd[1732]: time="2025-06-20T19:15:24.790458687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\"" Jun 20 19:15:24.792707 containerd[1732]: time="2025-06-20T19:15:24.792664895Z" level=info msg="CreateContainer within sandbox \"a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 20 19:15:24.812712 containerd[1732]: time="2025-06-20T19:15:24.811839020Z" level=info msg="Container 944dbc47840f3e17034a0593d81bf34550c57ed6047c75cb2c80770b0e21fc55: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:24.831964 containerd[1732]: time="2025-06-20T19:15:24.831923838Z" level=info msg="CreateContainer within sandbox \"a0f0c3c58a108af3a61fdfc98e98077340fd4659381f58e9e0f0c709893ca545\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"944dbc47840f3e17034a0593d81bf34550c57ed6047c75cb2c80770b0e21fc55\"" Jun 20 19:15:24.832560 containerd[1732]: time="2025-06-20T19:15:24.832505213Z" level=info msg="StartContainer for \"944dbc47840f3e17034a0593d81bf34550c57ed6047c75cb2c80770b0e21fc55\"" Jun 20 19:15:24.835045 containerd[1732]: time="2025-06-20T19:15:24.835009658Z" level=info msg="connecting to shim 944dbc47840f3e17034a0593d81bf34550c57ed6047c75cb2c80770b0e21fc55" address="unix:///run/containerd/s/f6c156956437a4786c53e2ccadedd119d873dd07257d97f9621e33652acb8327" protocol=ttrpc version=3 Jun 20 19:15:24.858881 systemd[1]: Started cri-containerd-944dbc47840f3e17034a0593d81bf34550c57ed6047c75cb2c80770b0e21fc55.scope - libcontainer container 944dbc47840f3e17034a0593d81bf34550c57ed6047c75cb2c80770b0e21fc55. 
Jun 20 19:15:24.909622 containerd[1732]: time="2025-06-20T19:15:24.909575062Z" level=info msg="StartContainer for \"944dbc47840f3e17034a0593d81bf34550c57ed6047c75cb2c80770b0e21fc55\" returns successfully" Jun 20 19:15:25.188966 kubelet[3133]: I0620 19:15:25.188768 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-d44fd6dcd-m855x" podStartSLOduration=1.197413974 podStartE2EDuration="5.188750968s" podCreationTimestamp="2025-06-20 19:15:20 +0000 UTC" firstStartedPulling="2025-06-20 19:15:20.799929998 +0000 UTC m=+47.924069813" lastFinishedPulling="2025-06-20 19:15:24.791266991 +0000 UTC m=+51.915406807" observedRunningTime="2025-06-20 19:15:25.188412157 +0000 UTC m=+52.312551966" watchObservedRunningTime="2025-06-20 19:15:25.188750968 +0000 UTC m=+52.312890899" Jun 20 19:15:27.017178 containerd[1732]: time="2025-06-20T19:15:27.016941457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gthft,Uid:f6f972ec-3558-420c-8e2b-8fd07b233bae,Namespace:kube-system,Attempt:0,}" Jun 20 19:15:27.017840 containerd[1732]: time="2025-06-20T19:15:27.017773392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-t5qzq,Uid:f70ef370-41e2-4b31-b596-90be9e228851,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:27.018011 containerd[1732]: time="2025-06-20T19:15:27.017788376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mcth6,Uid:f7a4c679-0707-404c-9de0-33337a161752,Namespace:kube-system,Attempt:0,}" Jun 20 19:15:27.184246 systemd-networkd[1359]: calia1583c1f70d: Link UP Jun 20 19:15:27.185725 systemd-networkd[1359]: calia1583c1f70d: Gained carrier Jun 20 19:15:27.201278 containerd[1732]: 2025-06-20 19:15:27.102 [INFO][4610] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--69d2cbc98d-k8s-goldmane--5bd85449d4--t5qzq-eth0 goldmane-5bd85449d4- calico-system 
f70ef370-41e2-4b31-b596-90be9e228851 825 0 2025-06-20 19:14:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.1.0-a-69d2cbc98d goldmane-5bd85449d4-t5qzq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia1583c1f70d [] [] }} ContainerID="82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" Namespace="calico-system" Pod="goldmane-5bd85449d4-t5qzq" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-goldmane--5bd85449d4--t5qzq-" Jun 20 19:15:27.201278 containerd[1732]: 2025-06-20 19:15:27.102 [INFO][4610] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" Namespace="calico-system" Pod="goldmane-5bd85449d4-t5qzq" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-goldmane--5bd85449d4--t5qzq-eth0" Jun 20 19:15:27.201278 containerd[1732]: 2025-06-20 19:15:27.140 [INFO][4643] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" HandleID="k8s-pod-network.82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-goldmane--5bd85449d4--t5qzq-eth0" Jun 20 19:15:27.201780 containerd[1732]: 2025-06-20 19:15:27.140 [INFO][4643] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" HandleID="k8s-pod-network.82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-goldmane--5bd85449d4--t5qzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d0ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-69d2cbc98d", "pod":"goldmane-5bd85449d4-t5qzq", 
"timestamp":"2025-06-20 19:15:27.14002453 +0000 UTC"}, Hostname:"ci-4344.1.0-a-69d2cbc98d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:27.201780 containerd[1732]: 2025-06-20 19:15:27.143 [INFO][4643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:15:27.201780 containerd[1732]: 2025-06-20 19:15:27.143 [INFO][4643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:15:27.201780 containerd[1732]: 2025-06-20 19:15:27.143 [INFO][4643] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-69d2cbc98d' Jun 20 19:15:27.201780 containerd[1732]: 2025-06-20 19:15:27.151 [INFO][4643] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.201780 containerd[1732]: 2025-06-20 19:15:27.155 [INFO][4643] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.201780 containerd[1732]: 2025-06-20 19:15:27.158 [INFO][4643] ipam/ipam.go 511: Trying affinity for 192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.201780 containerd[1732]: 2025-06-20 19:15:27.159 [INFO][4643] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.201780 containerd[1732]: 2025-06-20 19:15:27.161 [INFO][4643] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.202005 containerd[1732]: 2025-06-20 19:15:27.161 [INFO][4643] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 
19:15:27.202005 containerd[1732]: 2025-06-20 19:15:27.162 [INFO][4643] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4 Jun 20 19:15:27.202005 containerd[1732]: 2025-06-20 19:15:27.171 [INFO][4643] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.202005 containerd[1732]: 2025-06-20 19:15:27.175 [INFO][4643] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.194/26] block=192.168.34.192/26 handle="k8s-pod-network.82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.202005 containerd[1732]: 2025-06-20 19:15:27.175 [INFO][4643] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.194/26] handle="k8s-pod-network.82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.202005 containerd[1732]: 2025-06-20 19:15:27.176 [INFO][4643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:15:27.202005 containerd[1732]: 2025-06-20 19:15:27.176 [INFO][4643] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.194/26] IPv6=[] ContainerID="82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" HandleID="k8s-pod-network.82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-goldmane--5bd85449d4--t5qzq-eth0" Jun 20 19:15:27.202197 containerd[1732]: 2025-06-20 19:15:27.178 [INFO][4610] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" Namespace="calico-system" Pod="goldmane-5bd85449d4-t5qzq" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-goldmane--5bd85449d4--t5qzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-goldmane--5bd85449d4--t5qzq-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"f70ef370-41e2-4b31-b596-90be9e228851", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"", Pod:"goldmane-5bd85449d4-t5qzq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"calia1583c1f70d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:27.202197 containerd[1732]: 2025-06-20 19:15:27.178 [INFO][4610] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.194/32] ContainerID="82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" Namespace="calico-system" Pod="goldmane-5bd85449d4-t5qzq" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-goldmane--5bd85449d4--t5qzq-eth0" Jun 20 19:15:27.202297 containerd[1732]: 2025-06-20 19:15:27.178 [INFO][4610] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1583c1f70d ContainerID="82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" Namespace="calico-system" Pod="goldmane-5bd85449d4-t5qzq" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-goldmane--5bd85449d4--t5qzq-eth0" Jun 20 19:15:27.202297 containerd[1732]: 2025-06-20 19:15:27.184 [INFO][4610] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" Namespace="calico-system" Pod="goldmane-5bd85449d4-t5qzq" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-goldmane--5bd85449d4--t5qzq-eth0" Jun 20 19:15:27.203199 containerd[1732]: 2025-06-20 19:15:27.186 [INFO][4610] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" Namespace="calico-system" Pod="goldmane-5bd85449d4-t5qzq" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-goldmane--5bd85449d4--t5qzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-goldmane--5bd85449d4--t5qzq-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", 
UID:"f70ef370-41e2-4b31-b596-90be9e228851", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4", Pod:"goldmane-5bd85449d4-t5qzq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia1583c1f70d", MAC:"42:c9:28:9e:2a:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:27.203288 containerd[1732]: 2025-06-20 19:15:27.199 [INFO][4610] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" Namespace="calico-system" Pod="goldmane-5bd85449d4-t5qzq" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-goldmane--5bd85449d4--t5qzq-eth0" Jun 20 19:15:27.255720 containerd[1732]: time="2025-06-20T19:15:27.254885394Z" level=info msg="connecting to shim 82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4" address="unix:///run/containerd/s/f56d04e53b189088e3805f5af36cf87910283d4c8b5efe5c505d9fcbc1cacc26" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:27.291842 systemd[1]: Started 
cri-containerd-82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4.scope - libcontainer container 82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4. Jun 20 19:15:27.318520 systemd-networkd[1359]: cali011a1a03fdb: Link UP Jun 20 19:15:27.322949 systemd-networkd[1359]: cali011a1a03fdb: Gained carrier Jun 20 19:15:27.350273 containerd[1732]: 2025-06-20 19:15:27.093 [INFO][4601] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--gthft-eth0 coredns-668d6bf9bc- kube-system f6f972ec-3558-420c-8e2b-8fd07b233bae 814 0 2025-06-20 19:14:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.0-a-69d2cbc98d coredns-668d6bf9bc-gthft eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali011a1a03fdb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-gthft" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--gthft-" Jun 20 19:15:27.350273 containerd[1732]: 2025-06-20 19:15:27.094 [INFO][4601] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-gthft" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--gthft-eth0" Jun 20 19:15:27.350273 containerd[1732]: 2025-06-20 19:15:27.148 [INFO][4637] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" HandleID="k8s-pod-network.55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" 
Workload="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--gthft-eth0" Jun 20 19:15:27.350591 containerd[1732]: 2025-06-20 19:15:27.149 [INFO][4637] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" HandleID="k8s-pod-network.55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--gthft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.0-a-69d2cbc98d", "pod":"coredns-668d6bf9bc-gthft", "timestamp":"2025-06-20 19:15:27.147347745 +0000 UTC"}, Hostname:"ci-4344.1.0-a-69d2cbc98d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:27.350591 containerd[1732]: 2025-06-20 19:15:27.149 [INFO][4637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:15:27.350591 containerd[1732]: 2025-06-20 19:15:27.176 [INFO][4637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:15:27.350591 containerd[1732]: 2025-06-20 19:15:27.176 [INFO][4637] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-69d2cbc98d' Jun 20 19:15:27.350591 containerd[1732]: 2025-06-20 19:15:27.253 [INFO][4637] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.350591 containerd[1732]: 2025-06-20 19:15:27.262 [INFO][4637] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.350591 containerd[1732]: 2025-06-20 19:15:27.276 [INFO][4637] ipam/ipam.go 511: Trying affinity for 192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.350591 containerd[1732]: 2025-06-20 19:15:27.281 [INFO][4637] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.350591 containerd[1732]: 2025-06-20 19:15:27.286 [INFO][4637] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.350836 containerd[1732]: 2025-06-20 19:15:27.286 [INFO][4637] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.350836 containerd[1732]: 2025-06-20 19:15:27.293 [INFO][4637] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29 Jun 20 19:15:27.350836 containerd[1732]: 2025-06-20 19:15:27.298 [INFO][4637] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.350836 containerd[1732]: 2025-06-20 19:15:27.308 [INFO][4637] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.34.195/26] block=192.168.34.192/26 handle="k8s-pod-network.55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.350836 containerd[1732]: 2025-06-20 19:15:27.308 [INFO][4637] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.195/26] handle="k8s-pod-network.55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.350836 containerd[1732]: 2025-06-20 19:15:27.308 [INFO][4637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:15:27.350836 containerd[1732]: 2025-06-20 19:15:27.308 [INFO][4637] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.195/26] IPv6=[] ContainerID="55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" HandleID="k8s-pod-network.55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--gthft-eth0" Jun 20 19:15:27.350987 containerd[1732]: 2025-06-20 19:15:27.310 [INFO][4601] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-gthft" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--gthft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--gthft-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f6f972ec-3558-420c-8e2b-8fd07b233bae", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"", Pod:"coredns-668d6bf9bc-gthft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali011a1a03fdb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:27.350987 containerd[1732]: 2025-06-20 19:15:27.310 [INFO][4601] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.195/32] ContainerID="55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-gthft" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--gthft-eth0" Jun 20 19:15:27.350987 containerd[1732]: 2025-06-20 19:15:27.310 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali011a1a03fdb ContainerID="55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-gthft" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--gthft-eth0" Jun 20 19:15:27.350987 containerd[1732]: 2025-06-20 19:15:27.326 [INFO][4601] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-gthft" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--gthft-eth0" Jun 20 19:15:27.350987 containerd[1732]: 2025-06-20 19:15:27.327 [INFO][4601] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-gthft" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--gthft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--gthft-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f6f972ec-3558-420c-8e2b-8fd07b233bae", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29", Pod:"coredns-668d6bf9bc-gthft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali011a1a03fdb", MAC:"26:93:a5:72:36:9a", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:27.350987 containerd[1732]: 2025-06-20 19:15:27.345 [INFO][4601] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-gthft" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--gthft-eth0" Jun 20 19:15:27.363853 containerd[1732]: time="2025-06-20T19:15:27.363750567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-t5qzq,Uid:f70ef370-41e2-4b31-b596-90be9e228851,Namespace:calico-system,Attempt:0,} returns sandbox id \"82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4\"" Jun 20 19:15:27.365824 containerd[1732]: time="2025-06-20T19:15:27.365794494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 20 19:15:27.398689 systemd-networkd[1359]: cali62f51b8227e: Link UP Jun 20 19:15:27.399653 systemd-networkd[1359]: cali62f51b8227e: Gained carrier Jun 20 19:15:27.412131 containerd[1732]: time="2025-06-20T19:15:27.412078056Z" level=info msg="connecting to shim 55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29" address="unix:///run/containerd/s/991ab8ec0526d7d02c1c01ad82bf5eed5af0b87a6fa29fcca61c45d534693a52" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.102 [INFO][4620] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--mcth6-eth0 coredns-668d6bf9bc- kube-system f7a4c679-0707-404c-9de0-33337a161752 817 0 2025-06-20 19:14:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.0-a-69d2cbc98d coredns-668d6bf9bc-mcth6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali62f51b8227e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcth6" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--mcth6-" Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.103 [INFO][4620] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcth6" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--mcth6-eth0" Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.152 [INFO][4644] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" HandleID="k8s-pod-network.3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--mcth6-eth0" Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.152 [INFO][4644] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" HandleID="k8s-pod-network.3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--mcth6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.0-a-69d2cbc98d", "pod":"coredns-668d6bf9bc-mcth6", "timestamp":"2025-06-20 19:15:27.152456417 +0000 UTC"}, Hostname:"ci-4344.1.0-a-69d2cbc98d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.152 [INFO][4644] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.308 [INFO][4644] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.308 [INFO][4644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-69d2cbc98d' Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.353 [INFO][4644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.362 [INFO][4644] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.374 [INFO][4644] ipam/ipam.go 511: Trying affinity for 192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.377 [INFO][4644] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.379 [INFO][4644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.379 [INFO][4644] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.192/26 
handle="k8s-pod-network.3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.380 [INFO][4644] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82 Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.385 [INFO][4644] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.393 [INFO][4644] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.196/26] block=192.168.34.192/26 handle="k8s-pod-network.3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.394 [INFO][4644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.196/26] handle="k8s-pod-network.3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.394 [INFO][4644] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:15:27.423408 containerd[1732]: 2025-06-20 19:15:27.394 [INFO][4644] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.196/26] IPv6=[] ContainerID="3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" HandleID="k8s-pod-network.3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--mcth6-eth0" Jun 20 19:15:27.424021 containerd[1732]: 2025-06-20 19:15:27.395 [INFO][4620] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcth6" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--mcth6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--mcth6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f7a4c679-0707-404c-9de0-33337a161752", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"", Pod:"coredns-668d6bf9bc-mcth6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali62f51b8227e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:27.424021 containerd[1732]: 2025-06-20 19:15:27.395 [INFO][4620] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.196/32] ContainerID="3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcth6" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--mcth6-eth0" Jun 20 19:15:27.424021 containerd[1732]: 2025-06-20 19:15:27.395 [INFO][4620] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62f51b8227e ContainerID="3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcth6" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--mcth6-eth0" Jun 20 19:15:27.424021 containerd[1732]: 2025-06-20 19:15:27.399 [INFO][4620] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcth6" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--mcth6-eth0" Jun 20 19:15:27.424021 containerd[1732]: 2025-06-20 19:15:27.400 [INFO][4620] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcth6" 
WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--mcth6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--mcth6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f7a4c679-0707-404c-9de0-33337a161752", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82", Pod:"coredns-668d6bf9bc-mcth6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62f51b8227e", MAC:"e2:12:95:73:25:67", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:27.424021 containerd[1732]: 
2025-06-20 19:15:27.418 [INFO][4620] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcth6" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-coredns--668d6bf9bc--mcth6-eth0" Jun 20 19:15:27.444088 systemd[1]: Started cri-containerd-55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29.scope - libcontainer container 55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29. Jun 20 19:15:27.463032 containerd[1732]: time="2025-06-20T19:15:27.462831337Z" level=info msg="connecting to shim 3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82" address="unix:///run/containerd/s/f146b5f0dda35477e6c7060a3680ca6a3809441a6760f6e6c95246ded7143c64" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:27.486919 systemd[1]: Started cri-containerd-3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82.scope - libcontainer container 3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82. 
Jun 20 19:15:27.500625 containerd[1732]: time="2025-06-20T19:15:27.500549374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gthft,Uid:f6f972ec-3558-420c-8e2b-8fd07b233bae,Namespace:kube-system,Attempt:0,} returns sandbox id \"55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29\"" Jun 20 19:15:27.505088 containerd[1732]: time="2025-06-20T19:15:27.505059173Z" level=info msg="CreateContainer within sandbox \"55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:15:27.527648 containerd[1732]: time="2025-06-20T19:15:27.527604183Z" level=info msg="Container 30cb8825d063e06351ea2287970cc7a3906f6e50030a6d726ed054052b569646: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:27.538478 containerd[1732]: time="2025-06-20T19:15:27.538403153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mcth6,Uid:f7a4c679-0707-404c-9de0-33337a161752,Namespace:kube-system,Attempt:0,} returns sandbox id \"3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82\"" Jun 20 19:15:27.541036 containerd[1732]: time="2025-06-20T19:15:27.541007738Z" level=info msg="CreateContainer within sandbox \"3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:15:27.553002 containerd[1732]: time="2025-06-20T19:15:27.552856249Z" level=info msg="CreateContainer within sandbox \"55d55bbd8fdb4b3712295c171aca9a2139896fce3e44e6dbfb27856de6627c29\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30cb8825d063e06351ea2287970cc7a3906f6e50030a6d726ed054052b569646\"" Jun 20 19:15:27.553515 containerd[1732]: time="2025-06-20T19:15:27.553466887Z" level=info msg="StartContainer for \"30cb8825d063e06351ea2287970cc7a3906f6e50030a6d726ed054052b569646\"" Jun 20 19:15:27.554553 containerd[1732]: time="2025-06-20T19:15:27.554523145Z" level=info 
msg="connecting to shim 30cb8825d063e06351ea2287970cc7a3906f6e50030a6d726ed054052b569646" address="unix:///run/containerd/s/991ab8ec0526d7d02c1c01ad82bf5eed5af0b87a6fa29fcca61c45d534693a52" protocol=ttrpc version=3 Jun 20 19:15:27.562302 containerd[1732]: time="2025-06-20T19:15:27.562273187Z" level=info msg="Container d321b8303ca98ebd13451f5344c2f557d25a5d92035f5155cc4591c8c0d59c0b: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:27.571897 systemd[1]: Started cri-containerd-30cb8825d063e06351ea2287970cc7a3906f6e50030a6d726ed054052b569646.scope - libcontainer container 30cb8825d063e06351ea2287970cc7a3906f6e50030a6d726ed054052b569646. Jun 20 19:15:27.577030 containerd[1732]: time="2025-06-20T19:15:27.576957900Z" level=info msg="CreateContainer within sandbox \"3eb9b9466a75d13fc6a1cc0ebedf6fee95fb72733b297fa8fda2f0ba92a80d82\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d321b8303ca98ebd13451f5344c2f557d25a5d92035f5155cc4591c8c0d59c0b\"" Jun 20 19:15:27.577385 containerd[1732]: time="2025-06-20T19:15:27.577362377Z" level=info msg="StartContainer for \"d321b8303ca98ebd13451f5344c2f557d25a5d92035f5155cc4591c8c0d59c0b\"" Jun 20 19:15:27.578641 containerd[1732]: time="2025-06-20T19:15:27.578594306Z" level=info msg="connecting to shim d321b8303ca98ebd13451f5344c2f557d25a5d92035f5155cc4591c8c0d59c0b" address="unix:///run/containerd/s/f146b5f0dda35477e6c7060a3680ca6a3809441a6760f6e6c95246ded7143c64" protocol=ttrpc version=3 Jun 20 19:15:27.599997 systemd[1]: Started cri-containerd-d321b8303ca98ebd13451f5344c2f557d25a5d92035f5155cc4591c8c0d59c0b.scope - libcontainer container d321b8303ca98ebd13451f5344c2f557d25a5d92035f5155cc4591c8c0d59c0b. 
Jun 20 19:15:27.613373 containerd[1732]: time="2025-06-20T19:15:27.613336535Z" level=info msg="StartContainer for \"30cb8825d063e06351ea2287970cc7a3906f6e50030a6d726ed054052b569646\" returns successfully" Jun 20 19:15:27.646622 containerd[1732]: time="2025-06-20T19:15:27.646577902Z" level=info msg="StartContainer for \"d321b8303ca98ebd13451f5344c2f557d25a5d92035f5155cc4591c8c0d59c0b\" returns successfully" Jun 20 19:15:28.016258 containerd[1732]: time="2025-06-20T19:15:28.016089174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bc98767b-z6f9b,Uid:643a3c1d-c11b-429d-af2c-62cba85afc5a,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:15:28.016601 containerd[1732]: time="2025-06-20T19:15:28.016089188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6595c785f9-rstkf,Uid:529edae3-7d6f-495a-99af-8dcab5ab6f83,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:28.156203 systemd-networkd[1359]: calif2714d97dfe: Link UP Jun 20 19:15:28.157595 systemd-networkd[1359]: calif2714d97dfe: Gained carrier Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.087 [INFO][4901] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--69d2cbc98d-k8s-calico--kube--controllers--6595c785f9--rstkf-eth0 calico-kube-controllers-6595c785f9- calico-system 529edae3-7d6f-495a-99af-8dcab5ab6f83 824 0 2025-06-20 19:14:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6595c785f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.1.0-a-69d2cbc98d calico-kube-controllers-6595c785f9-rstkf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif2714d97dfe [] [] }} ContainerID="ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" 
Namespace="calico-system" Pod="calico-kube-controllers-6595c785f9-rstkf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--kube--controllers--6595c785f9--rstkf-" Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.088 [INFO][4901] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" Namespace="calico-system" Pod="calico-kube-controllers-6595c785f9-rstkf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--kube--controllers--6595c785f9--rstkf-eth0" Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.118 [INFO][4926] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" HandleID="k8s-pod-network.ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-calico--kube--controllers--6595c785f9--rstkf-eth0" Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.118 [INFO][4926] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" HandleID="k8s-pod-network.ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-calico--kube--controllers--6595c785f9--rstkf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-69d2cbc98d", "pod":"calico-kube-controllers-6595c785f9-rstkf", "timestamp":"2025-06-20 19:15:28.11815413 +0000 UTC"}, Hostname:"ci-4344.1.0-a-69d2cbc98d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.118 [INFO][4926] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.118 [INFO][4926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.118 [INFO][4926] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-69d2cbc98d' Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.125 [INFO][4926] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.128 [INFO][4926] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.132 [INFO][4926] ipam/ipam.go 511: Trying affinity for 192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.133 [INFO][4926] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.135 [INFO][4926] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.135 [INFO][4926] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.137 [INFO][4926] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279 Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.144 [INFO][4926] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.192/26 
handle="k8s-pod-network.ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.149 [INFO][4926] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.197/26] block=192.168.34.192/26 handle="k8s-pod-network.ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.149 [INFO][4926] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.197/26] handle="k8s-pod-network.ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.149 [INFO][4926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:15:28.174429 containerd[1732]: 2025-06-20 19:15:28.149 [INFO][4926] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.197/26] IPv6=[] ContainerID="ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" HandleID="k8s-pod-network.ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-calico--kube--controllers--6595c785f9--rstkf-eth0" Jun 20 19:15:28.175569 containerd[1732]: 2025-06-20 19:15:28.151 [INFO][4901] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" Namespace="calico-system" Pod="calico-kube-controllers-6595c785f9-rstkf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--kube--controllers--6595c785f9--rstkf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-calico--kube--controllers--6595c785f9--rstkf-eth0", GenerateName:"calico-kube-controllers-6595c785f9-", Namespace:"calico-system", SelfLink:"", UID:"529edae3-7d6f-495a-99af-8dcab5ab6f83", 
ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6595c785f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"", Pod:"calico-kube-controllers-6595c785f9-rstkf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2714d97dfe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:28.175569 containerd[1732]: 2025-06-20 19:15:28.151 [INFO][4901] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.197/32] ContainerID="ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" Namespace="calico-system" Pod="calico-kube-controllers-6595c785f9-rstkf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--kube--controllers--6595c785f9--rstkf-eth0" Jun 20 19:15:28.175569 containerd[1732]: 2025-06-20 19:15:28.151 [INFO][4901] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2714d97dfe ContainerID="ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" Namespace="calico-system" Pod="calico-kube-controllers-6595c785f9-rstkf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--kube--controllers--6595c785f9--rstkf-eth0" 
Jun 20 19:15:28.175569 containerd[1732]: 2025-06-20 19:15:28.159 [INFO][4901] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" Namespace="calico-system" Pod="calico-kube-controllers-6595c785f9-rstkf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--kube--controllers--6595c785f9--rstkf-eth0" Jun 20 19:15:28.175569 containerd[1732]: 2025-06-20 19:15:28.159 [INFO][4901] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" Namespace="calico-system" Pod="calico-kube-controllers-6595c785f9-rstkf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--kube--controllers--6595c785f9--rstkf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-calico--kube--controllers--6595c785f9--rstkf-eth0", GenerateName:"calico-kube-controllers-6595c785f9-", Namespace:"calico-system", SelfLink:"", UID:"529edae3-7d6f-495a-99af-8dcab5ab6f83", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6595c785f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279", 
Pod:"calico-kube-controllers-6595c785f9-rstkf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2714d97dfe", MAC:"86:dc:4f:53:c8:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:28.175569 containerd[1732]: 2025-06-20 19:15:28.171 [INFO][4901] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" Namespace="calico-system" Pod="calico-kube-controllers-6595c785f9-rstkf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--kube--controllers--6595c785f9--rstkf-eth0" Jun 20 19:15:28.209254 kubelet[3133]: I0620 19:15:28.209192 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mcth6" podStartSLOduration=50.209171318 podStartE2EDuration="50.209171318s" podCreationTimestamp="2025-06-20 19:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:15:28.207998784 +0000 UTC m=+55.332138600" watchObservedRunningTime="2025-06-20 19:15:28.209171318 +0000 UTC m=+55.333311137" Jun 20 19:15:28.237770 kubelet[3133]: I0620 19:15:28.237692 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gthft" podStartSLOduration=50.237671888 podStartE2EDuration="50.237671888s" podCreationTimestamp="2025-06-20 19:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:15:28.236231 +0000 UTC m=+55.360370813" watchObservedRunningTime="2025-06-20 19:15:28.237671888 +0000 UTC m=+55.361811702" Jun 20 
19:15:28.242484 containerd[1732]: time="2025-06-20T19:15:28.242430818Z" level=info msg="connecting to shim ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279" address="unix:///run/containerd/s/6b1ec3d68f64aeac73867ae97e9ce77ed2ccf5bdca2c72f6ba89c3bddfd75dc9" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:28.282033 systemd[1]: Started cri-containerd-ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279.scope - libcontainer container ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279. Jun 20 19:15:28.328052 systemd-networkd[1359]: cali61b004fc6e2: Link UP Jun 20 19:15:28.329098 systemd-networkd[1359]: cali61b004fc6e2: Gained carrier Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.087 [INFO][4900] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--z6f9b-eth0 calico-apiserver-56bc98767b- calico-apiserver 643a3c1d-c11b-429d-af2c-62cba85afc5a 823 0 2025-06-20 19:14:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56bc98767b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-69d2cbc98d calico-apiserver-56bc98767b-z6f9b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali61b004fc6e2 [] [] }} ContainerID="6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" Namespace="calico-apiserver" Pod="calico-apiserver-56bc98767b-z6f9b" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--z6f9b-" Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.087 [INFO][4900] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" Namespace="calico-apiserver" 
Pod="calico-apiserver-56bc98767b-z6f9b" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--z6f9b-eth0" Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.118 [INFO][4924] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" HandleID="k8s-pod-network.6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--z6f9b-eth0" Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.118 [INFO][4924] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" HandleID="k8s-pod-network.6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--z6f9b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-69d2cbc98d", "pod":"calico-apiserver-56bc98767b-z6f9b", "timestamp":"2025-06-20 19:15:28.118837018 +0000 UTC"}, Hostname:"ci-4344.1.0-a-69d2cbc98d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.119 [INFO][4924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.149 [INFO][4924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.149 [INFO][4924] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-69d2cbc98d' Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.230 [INFO][4924] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.252 [INFO][4924] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.289 [INFO][4924] ipam/ipam.go 511: Trying affinity for 192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.293 [INFO][4924] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.298 [INFO][4924] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.299 [INFO][4924] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.303 [INFO][4924] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132 Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.312 [INFO][4924] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.319 [INFO][4924] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.34.198/26] block=192.168.34.192/26 handle="k8s-pod-network.6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.319 [INFO][4924] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.198/26] handle="k8s-pod-network.6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.319 [INFO][4924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:15:28.344057 containerd[1732]: 2025-06-20 19:15:28.319 [INFO][4924] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.198/26] IPv6=[] ContainerID="6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" HandleID="k8s-pod-network.6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--z6f9b-eth0" Jun 20 19:15:28.344622 containerd[1732]: 2025-06-20 19:15:28.324 [INFO][4900] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" Namespace="calico-apiserver" Pod="calico-apiserver-56bc98767b-z6f9b" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--z6f9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--z6f9b-eth0", GenerateName:"calico-apiserver-56bc98767b-", Namespace:"calico-apiserver", SelfLink:"", UID:"643a3c1d-c11b-429d-af2c-62cba85afc5a", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"56bc98767b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"", Pod:"calico-apiserver-56bc98767b-z6f9b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61b004fc6e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:28.344622 containerd[1732]: 2025-06-20 19:15:28.324 [INFO][4900] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.198/32] ContainerID="6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" Namespace="calico-apiserver" Pod="calico-apiserver-56bc98767b-z6f9b" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--z6f9b-eth0" Jun 20 19:15:28.344622 containerd[1732]: 2025-06-20 19:15:28.325 [INFO][4900] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61b004fc6e2 ContainerID="6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" Namespace="calico-apiserver" Pod="calico-apiserver-56bc98767b-z6f9b" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--z6f9b-eth0" Jun 20 19:15:28.344622 containerd[1732]: 2025-06-20 19:15:28.328 [INFO][4900] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" Namespace="calico-apiserver" Pod="calico-apiserver-56bc98767b-z6f9b" 
WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--z6f9b-eth0" Jun 20 19:15:28.344622 containerd[1732]: 2025-06-20 19:15:28.328 [INFO][4900] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" Namespace="calico-apiserver" Pod="calico-apiserver-56bc98767b-z6f9b" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--z6f9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--z6f9b-eth0", GenerateName:"calico-apiserver-56bc98767b-", Namespace:"calico-apiserver", SelfLink:"", UID:"643a3c1d-c11b-429d-af2c-62cba85afc5a", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56bc98767b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132", Pod:"calico-apiserver-56bc98767b-z6f9b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61b004fc6e2", MAC:"b2:90:cb:d9:69:74", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:28.344622 containerd[1732]: 2025-06-20 19:15:28.340 [INFO][4900] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" Namespace="calico-apiserver" Pod="calico-apiserver-56bc98767b-z6f9b" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--z6f9b-eth0" Jun 20 19:15:28.390205 containerd[1732]: time="2025-06-20T19:15:28.390129948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6595c785f9-rstkf,Uid:529edae3-7d6f-495a-99af-8dcab5ab6f83,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279\"" Jun 20 19:15:28.397707 containerd[1732]: time="2025-06-20T19:15:28.397619281Z" level=info msg="connecting to shim 6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132" address="unix:///run/containerd/s/fa8baf06413139d8bccf357e0e837de457fbd021b83a9f09f2e7c9fe75982540" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:28.422874 systemd[1]: Started cri-containerd-6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132.scope - libcontainer container 6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132. 
Jun 20 19:15:28.466886 containerd[1732]: time="2025-06-20T19:15:28.466844852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bc98767b-z6f9b,Uid:643a3c1d-c11b-429d-af2c-62cba85afc5a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132\"" Jun 20 19:15:28.676934 systemd-networkd[1359]: calia1583c1f70d: Gained IPv6LL Jun 20 19:15:29.019733 containerd[1732]: time="2025-06-20T19:15:29.019309394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bc98767b-dgvxb,Uid:1ea98c83-5bbf-4665-a250-04d974d93140,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:15:29.060920 systemd-networkd[1359]: cali011a1a03fdb: Gained IPv6LL Jun 20 19:15:29.124964 systemd-networkd[1359]: cali62f51b8227e: Gained IPv6LL Jun 20 19:15:29.206230 systemd-networkd[1359]: cali8b2fd87af76: Link UP Jun 20 19:15:29.207361 systemd-networkd[1359]: cali8b2fd87af76: Gained carrier Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.097 [INFO][5057] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--dgvxb-eth0 calico-apiserver-56bc98767b- calico-apiserver 1ea98c83-5bbf-4665-a250-04d974d93140 822 0 2025-06-20 19:14:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56bc98767b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-69d2cbc98d calico-apiserver-56bc98767b-dgvxb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8b2fd87af76 [] [] }} ContainerID="394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" Namespace="calico-apiserver" Pod="calico-apiserver-56bc98767b-dgvxb" 
WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--dgvxb-" Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.097 [INFO][5057] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" Namespace="calico-apiserver" Pod="calico-apiserver-56bc98767b-dgvxb" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--dgvxb-eth0" Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.141 [INFO][5069] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" HandleID="k8s-pod-network.394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--dgvxb-eth0" Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.142 [INFO][5069] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" HandleID="k8s-pod-network.394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--dgvxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd2a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-69d2cbc98d", "pod":"calico-apiserver-56bc98767b-dgvxb", "timestamp":"2025-06-20 19:15:29.14193168 +0000 UTC"}, Hostname:"ci-4344.1.0-a-69d2cbc98d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.142 [INFO][5069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.142 [INFO][5069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.142 [INFO][5069] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-69d2cbc98d' Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.155 [INFO][5069] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.163 [INFO][5069] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.170 [INFO][5069] ipam/ipam.go 511: Trying affinity for 192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.173 [INFO][5069] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.176 [INFO][5069] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.176 [INFO][5069] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.178 [INFO][5069] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9 Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.184 [INFO][5069] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" 
host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.197 [INFO][5069] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.199/26] block=192.168.34.192/26 handle="k8s-pod-network.394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.197 [INFO][5069] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.199/26] handle="k8s-pod-network.394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.197 [INFO][5069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:15:29.234086 containerd[1732]: 2025-06-20 19:15:29.197 [INFO][5069] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.199/26] IPv6=[] ContainerID="394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" HandleID="k8s-pod-network.394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--dgvxb-eth0" Jun 20 19:15:29.235298 containerd[1732]: 2025-06-20 19:15:29.199 [INFO][5057] cni-plugin/k8s.go 418: Populated endpoint ContainerID="394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" Namespace="calico-apiserver" Pod="calico-apiserver-56bc98767b-dgvxb" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--dgvxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--dgvxb-eth0", GenerateName:"calico-apiserver-56bc98767b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ea98c83-5bbf-4665-a250-04d974d93140", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 47, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56bc98767b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"", Pod:"calico-apiserver-56bc98767b-dgvxb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b2fd87af76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:29.235298 containerd[1732]: 2025-06-20 19:15:29.199 [INFO][5057] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.199/32] ContainerID="394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" Namespace="calico-apiserver" Pod="calico-apiserver-56bc98767b-dgvxb" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--dgvxb-eth0" Jun 20 19:15:29.235298 containerd[1732]: 2025-06-20 19:15:29.200 [INFO][5057] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b2fd87af76 ContainerID="394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" Namespace="calico-apiserver" Pod="calico-apiserver-56bc98767b-dgvxb" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--dgvxb-eth0" Jun 20 19:15:29.235298 containerd[1732]: 2025-06-20 19:15:29.209 [INFO][5057] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" Namespace="calico-apiserver" Pod="calico-apiserver-56bc98767b-dgvxb" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--dgvxb-eth0" Jun 20 19:15:29.235298 containerd[1732]: 2025-06-20 19:15:29.211 [INFO][5057] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" Namespace="calico-apiserver" Pod="calico-apiserver-56bc98767b-dgvxb" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--dgvxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--dgvxb-eth0", GenerateName:"calico-apiserver-56bc98767b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ea98c83-5bbf-4665-a250-04d974d93140", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56bc98767b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9", Pod:"calico-apiserver-56bc98767b-dgvxb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b2fd87af76", MAC:"6a:7a:dd:1b:9f:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:29.235298 containerd[1732]: 2025-06-20 19:15:29.230 [INFO][5057] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" Namespace="calico-apiserver" Pod="calico-apiserver-56bc98767b-dgvxb" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-calico--apiserver--56bc98767b--dgvxb-eth0" Jun 20 19:15:29.298543 containerd[1732]: time="2025-06-20T19:15:29.298496604Z" level=info msg="connecting to shim 394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9" address="unix:///run/containerd/s/b20ca275234c741586a6ff8ef8c13d6f7d2cee6054082da36e4141290614b7e3" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:29.317040 systemd-networkd[1359]: calif2714d97dfe: Gained IPv6LL Jun 20 19:15:29.332879 systemd[1]: Started cri-containerd-394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9.scope - libcontainer container 394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9. Jun 20 19:15:29.435406 containerd[1732]: time="2025-06-20T19:15:29.435366623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bc98767b-dgvxb,Uid:1ea98c83-5bbf-4665-a250-04d974d93140,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9\"" Jun 20 19:15:29.529227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1859011702.mount: Deactivated successfully. 
Jun 20 19:15:29.956903 systemd-networkd[1359]: cali61b004fc6e2: Gained IPv6LL Jun 20 19:15:29.998241 containerd[1732]: time="2025-06-20T19:15:29.998194464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:30.000726 containerd[1732]: time="2025-06-20T19:15:30.000684529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=66352249" Jun 20 19:15:30.003623 containerd[1732]: time="2025-06-20T19:15:30.003571276Z" level=info msg="ImageCreate event name:\"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:30.010339 containerd[1732]: time="2025-06-20T19:15:30.010273988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:30.010908 containerd[1732]: time="2025-06-20T19:15:30.010779553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"66352095\" in 2.644777665s" Jun 20 19:15:30.010908 containerd[1732]: time="2025-06-20T19:15:30.010811680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\"" Jun 20 19:15:30.012454 containerd[1732]: time="2025-06-20T19:15:30.012400464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 20 19:15:30.013562 containerd[1732]: time="2025-06-20T19:15:30.013446861Z" level=info 
msg="CreateContainer within sandbox \"82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 20 19:15:30.034724 containerd[1732]: time="2025-06-20T19:15:30.034253810Z" level=info msg="Container e9edbd9e376f5d30d997cc71ab35e90d849937fdd1c66b645f62f2a61a06c387: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:30.035892 containerd[1732]: time="2025-06-20T19:15:30.034261095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcgqf,Uid:7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:30.069507 containerd[1732]: time="2025-06-20T19:15:30.068764923Z" level=info msg="CreateContainer within sandbox \"82595797fdd1ca74e823b95167270e4765366020a651f29730af0bdcf5c61cd4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e9edbd9e376f5d30d997cc71ab35e90d849937fdd1c66b645f62f2a61a06c387\"" Jun 20 19:15:30.069923 containerd[1732]: time="2025-06-20T19:15:30.069892014Z" level=info msg="StartContainer for \"e9edbd9e376f5d30d997cc71ab35e90d849937fdd1c66b645f62f2a61a06c387\"" Jun 20 19:15:30.072722 containerd[1732]: time="2025-06-20T19:15:30.072674678Z" level=info msg="connecting to shim e9edbd9e376f5d30d997cc71ab35e90d849937fdd1c66b645f62f2a61a06c387" address="unix:///run/containerd/s/f56d04e53b189088e3805f5af36cf87910283d4c8b5efe5c505d9fcbc1cacc26" protocol=ttrpc version=3 Jun 20 19:15:30.102013 systemd[1]: Started cri-containerd-e9edbd9e376f5d30d997cc71ab35e90d849937fdd1c66b645f62f2a61a06c387.scope - libcontainer container e9edbd9e376f5d30d997cc71ab35e90d849937fdd1c66b645f62f2a61a06c387. 
Jun 20 19:15:30.172301 containerd[1732]: time="2025-06-20T19:15:30.172257845Z" level=info msg="StartContainer for \"e9edbd9e376f5d30d997cc71ab35e90d849937fdd1c66b645f62f2a61a06c387\" returns successfully" Jun 20 19:15:30.183472 systemd-networkd[1359]: cali0b042c743ca: Link UP Jun 20 19:15:30.184923 systemd-networkd[1359]: cali0b042c743ca: Gained carrier Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.101 [INFO][5141] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--69d2cbc98d-k8s-csi--node--driver--tcgqf-eth0 csi-node-driver- calico-system 7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0 680 0 2025-06-20 19:14:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.1.0-a-69d2cbc98d csi-node-driver-tcgqf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0b042c743ca [] [] }} ContainerID="d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" Namespace="calico-system" Pod="csi-node-driver-tcgqf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-csi--node--driver--tcgqf-" Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.101 [INFO][5141] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" Namespace="calico-system" Pod="csi-node-driver-tcgqf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-csi--node--driver--tcgqf-eth0" Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.134 [INFO][5169] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" 
HandleID="k8s-pod-network.d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-csi--node--driver--tcgqf-eth0" Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.134 [INFO][5169] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" HandleID="k8s-pod-network.d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-csi--node--driver--tcgqf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-69d2cbc98d", "pod":"csi-node-driver-tcgqf", "timestamp":"2025-06-20 19:15:30.134069501 +0000 UTC"}, Hostname:"ci-4344.1.0-a-69d2cbc98d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.134 [INFO][5169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.134 [INFO][5169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.134 [INFO][5169] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-69d2cbc98d' Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.140 [INFO][5169] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.145 [INFO][5169] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.151 [INFO][5169] ipam/ipam.go 511: Trying affinity for 192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.153 [INFO][5169] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.156 [INFO][5169] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.156 [INFO][5169] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.157 [INFO][5169] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.165 [INFO][5169] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.177 [INFO][5169] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.34.200/26] block=192.168.34.192/26 handle="k8s-pod-network.d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.177 [INFO][5169] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.200/26] handle="k8s-pod-network.d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" host="ci-4344.1.0-a-69d2cbc98d" Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.177 [INFO][5169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:15:30.213007 containerd[1732]: 2025-06-20 19:15:30.177 [INFO][5169] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.200/26] IPv6=[] ContainerID="d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" HandleID="k8s-pod-network.d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" Workload="ci--4344.1.0--a--69d2cbc98d-k8s-csi--node--driver--tcgqf-eth0" Jun 20 19:15:30.213590 containerd[1732]: 2025-06-20 19:15:30.180 [INFO][5141] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" Namespace="calico-system" Pod="csi-node-driver-tcgqf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-csi--node--driver--tcgqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-csi--node--driver--tcgqf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"", Pod:"csi-node-driver-tcgqf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b042c743ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:30.213590 containerd[1732]: 2025-06-20 19:15:30.180 [INFO][5141] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.200/32] ContainerID="d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" Namespace="calico-system" Pod="csi-node-driver-tcgqf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-csi--node--driver--tcgqf-eth0" Jun 20 19:15:30.213590 containerd[1732]: 2025-06-20 19:15:30.180 [INFO][5141] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b042c743ca ContainerID="d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" Namespace="calico-system" Pod="csi-node-driver-tcgqf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-csi--node--driver--tcgqf-eth0" Jun 20 19:15:30.213590 containerd[1732]: 2025-06-20 19:15:30.185 [INFO][5141] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" Namespace="calico-system" Pod="csi-node-driver-tcgqf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-csi--node--driver--tcgqf-eth0" Jun 20 19:15:30.213590 
containerd[1732]: 2025-06-20 19:15:30.186 [INFO][5141] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" Namespace="calico-system" Pod="csi-node-driver-tcgqf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-csi--node--driver--tcgqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--69d2cbc98d-k8s-csi--node--driver--tcgqf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-69d2cbc98d", ContainerID:"d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b", Pod:"csi-node-driver-tcgqf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b042c743ca", MAC:"e6:70:8d:e5:4d:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:30.213590 containerd[1732]: 
2025-06-20 19:15:30.205 [INFO][5141] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" Namespace="calico-system" Pod="csi-node-driver-tcgqf" WorkloadEndpoint="ci--4344.1.0--a--69d2cbc98d-k8s-csi--node--driver--tcgqf-eth0" Jun 20 19:15:30.303487 containerd[1732]: time="2025-06-20T19:15:30.303395257Z" level=info msg="connecting to shim d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b" address="unix:///run/containerd/s/c50464dd9be3d5d4752501d7088378fb552cd9f036e6b9f3f482a16bb921d673" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:30.331876 systemd[1]: Started cri-containerd-d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b.scope - libcontainer container d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b. Jun 20 19:15:30.360737 containerd[1732]: time="2025-06-20T19:15:30.360674187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9edbd9e376f5d30d997cc71ab35e90d849937fdd1c66b645f62f2a61a06c387\" id:\"3f243f1ba6b6679c2fc24ee3e64bb1272241354708954b3404d9682dcd10ca62\" pid:5217 exited_at:{seconds:1750446930 nanos:360350859}" Jun 20 19:15:30.370313 containerd[1732]: time="2025-06-20T19:15:30.370231343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcgqf,Uid:7afa5fe6-9ec0-45d7-b74a-d76e477ad3c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b\"" Jun 20 19:15:30.379565 kubelet[3133]: I0620 19:15:30.379507 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-t5qzq" podStartSLOduration=38.733184588 podStartE2EDuration="41.379487033s" podCreationTimestamp="2025-06-20 19:14:49 +0000 UTC" firstStartedPulling="2025-06-20 19:15:27.365507717 +0000 UTC m=+54.489647530" lastFinishedPulling="2025-06-20 19:15:30.011810161 +0000 UTC m=+57.135949975" 
observedRunningTime="2025-06-20 19:15:30.246259657 +0000 UTC m=+57.370399486" watchObservedRunningTime="2025-06-20 19:15:30.379487033 +0000 UTC m=+57.503626846" Jun 20 19:15:31.108855 systemd-networkd[1359]: cali8b2fd87af76: Gained IPv6LL Jun 20 19:15:31.940876 systemd-networkd[1359]: cali0b042c743ca: Gained IPv6LL Jun 20 19:15:33.209266 containerd[1732]: time="2025-06-20T19:15:33.209218324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:33.214106 containerd[1732]: time="2025-06-20T19:15:33.214045045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=51246233" Jun 20 19:15:33.217719 containerd[1732]: time="2025-06-20T19:15:33.217656240Z" level=info msg="ImageCreate event name:\"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:33.222650 containerd[1732]: time="2025-06-20T19:15:33.222604560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:33.223518 containerd[1732]: time="2025-06-20T19:15:33.223466671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"52738904\" in 3.211031114s" Jun 20 19:15:33.223518 containerd[1732]: time="2025-06-20T19:15:33.223498638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference 
\"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\"" Jun 20 19:15:33.227474 containerd[1732]: time="2025-06-20T19:15:33.227326426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:15:33.242311 containerd[1732]: time="2025-06-20T19:15:33.241841851Z" level=info msg="CreateContainer within sandbox \"ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 20 19:15:33.261011 containerd[1732]: time="2025-06-20T19:15:33.260976232Z" level=info msg="Container e71c4892156c7d122d86bb1c80b7ee5f057c3eb2971ca2b5767cdc352c888bc9: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:33.267867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1793059225.mount: Deactivated successfully. Jun 20 19:15:33.278206 containerd[1732]: time="2025-06-20T19:15:33.278175343Z" level=info msg="CreateContainer within sandbox \"ab73ff4aed237263d116547ba5d24bbb3479b08cc0c8a964487fb58b1cdc9279\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e71c4892156c7d122d86bb1c80b7ee5f057c3eb2971ca2b5767cdc352c888bc9\"" Jun 20 19:15:33.278724 containerd[1732]: time="2025-06-20T19:15:33.278620200Z" level=info msg="StartContainer for \"e71c4892156c7d122d86bb1c80b7ee5f057c3eb2971ca2b5767cdc352c888bc9\"" Jun 20 19:15:33.279957 containerd[1732]: time="2025-06-20T19:15:33.279883638Z" level=info msg="connecting to shim e71c4892156c7d122d86bb1c80b7ee5f057c3eb2971ca2b5767cdc352c888bc9" address="unix:///run/containerd/s/6b1ec3d68f64aeac73867ae97e9ce77ed2ccf5bdca2c72f6ba89c3bddfd75dc9" protocol=ttrpc version=3 Jun 20 19:15:33.299839 systemd[1]: Started cri-containerd-e71c4892156c7d122d86bb1c80b7ee5f057c3eb2971ca2b5767cdc352c888bc9.scope - libcontainer container e71c4892156c7d122d86bb1c80b7ee5f057c3eb2971ca2b5767cdc352c888bc9. 
Jun 20 19:15:33.345758 containerd[1732]: time="2025-06-20T19:15:33.345690854Z" level=info msg="StartContainer for \"e71c4892156c7d122d86bb1c80b7ee5f057c3eb2971ca2b5767cdc352c888bc9\" returns successfully" Jun 20 19:15:34.307947 containerd[1732]: time="2025-06-20T19:15:34.307898200Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e71c4892156c7d122d86bb1c80b7ee5f057c3eb2971ca2b5767cdc352c888bc9\" id:\"f52350988a67e2b0333d5f2a92c8728f93ccfe91e96e594957eb8fd73f283feb\" pid:5343 exited_at:{seconds:1750446934 nanos:307627455}" Jun 20 19:15:34.328070 kubelet[3133]: I0620 19:15:34.327616 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6595c785f9-rstkf" podStartSLOduration=39.495612544 podStartE2EDuration="44.327596641s" podCreationTimestamp="2025-06-20 19:14:50 +0000 UTC" firstStartedPulling="2025-06-20 19:15:28.392565031 +0000 UTC m=+55.516704852" lastFinishedPulling="2025-06-20 19:15:33.224549136 +0000 UTC m=+60.348688949" observedRunningTime="2025-06-20 19:15:34.271828702 +0000 UTC m=+61.395968523" watchObservedRunningTime="2025-06-20 19:15:34.327596641 +0000 UTC m=+61.451736455" Jun 20 19:15:36.311902 containerd[1732]: time="2025-06-20T19:15:36.311851901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:36.316583 containerd[1732]: time="2025-06-20T19:15:36.316539789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=47305653" Jun 20 19:15:36.320307 containerd[1732]: time="2025-06-20T19:15:36.320257957Z" level=info msg="ImageCreate event name:\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:36.325542 containerd[1732]: time="2025-06-20T19:15:36.325480077Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:36.326043 containerd[1732]: time="2025-06-20T19:15:36.325912841Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 3.09824436s" Jun 20 19:15:36.326043 containerd[1732]: time="2025-06-20T19:15:36.325945753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:15:36.327013 containerd[1732]: time="2025-06-20T19:15:36.326988682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:15:36.328373 containerd[1732]: time="2025-06-20T19:15:36.328318053Z" level=info msg="CreateContainer within sandbox \"6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:15:36.348724 containerd[1732]: time="2025-06-20T19:15:36.347852131Z" level=info msg="Container 37bc0da5270a53a6ec767266e8a7472533870568167db2a5db7a37042061fab8: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:36.364877 containerd[1732]: time="2025-06-20T19:15:36.364841124Z" level=info msg="CreateContainer within sandbox \"6958e600baf337c3d17d5b35700d5386657c568484b03713f3f3171d9ccb1132\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"37bc0da5270a53a6ec767266e8a7472533870568167db2a5db7a37042061fab8\"" Jun 20 19:15:36.365477 containerd[1732]: time="2025-06-20T19:15:36.365397049Z" level=info msg="StartContainer for 
\"37bc0da5270a53a6ec767266e8a7472533870568167db2a5db7a37042061fab8\"" Jun 20 19:15:36.366591 containerd[1732]: time="2025-06-20T19:15:36.366518670Z" level=info msg="connecting to shim 37bc0da5270a53a6ec767266e8a7472533870568167db2a5db7a37042061fab8" address="unix:///run/containerd/s/fa8baf06413139d8bccf357e0e837de457fbd021b83a9f09f2e7c9fe75982540" protocol=ttrpc version=3 Jun 20 19:15:36.387866 systemd[1]: Started cri-containerd-37bc0da5270a53a6ec767266e8a7472533870568167db2a5db7a37042061fab8.scope - libcontainer container 37bc0da5270a53a6ec767266e8a7472533870568167db2a5db7a37042061fab8. Jun 20 19:15:36.434440 containerd[1732]: time="2025-06-20T19:15:36.434363737Z" level=info msg="StartContainer for \"37bc0da5270a53a6ec767266e8a7472533870568167db2a5db7a37042061fab8\" returns successfully" Jun 20 19:15:36.688483 containerd[1732]: time="2025-06-20T19:15:36.688260956Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:36.692484 containerd[1732]: time="2025-06-20T19:15:36.692447652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 19:15:36.693887 containerd[1732]: time="2025-06-20T19:15:36.693865962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 366.846644ms" Jun 20 19:15:36.693974 containerd[1732]: time="2025-06-20T19:15:36.693963853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:15:36.696689 containerd[1732]: time="2025-06-20T19:15:36.696471056Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 20 19:15:36.697619 containerd[1732]: time="2025-06-20T19:15:36.697592464Z" level=info msg="CreateContainer within sandbox \"394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:15:36.722200 containerd[1732]: time="2025-06-20T19:15:36.722160104Z" level=info msg="Container 45701ba752270edc9a1420329ed09a421a86eb4255b03a5c582e0c089a059437: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:36.746429 containerd[1732]: time="2025-06-20T19:15:36.745877819Z" level=info msg="CreateContainer within sandbox \"394bf87bda054fbee7c9b0da885664902e98e71e55836bc9f2988d419f3332d9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"45701ba752270edc9a1420329ed09a421a86eb4255b03a5c582e0c089a059437\"" Jun 20 19:15:36.746926 containerd[1732]: time="2025-06-20T19:15:36.746892092Z" level=info msg="StartContainer for \"45701ba752270edc9a1420329ed09a421a86eb4255b03a5c582e0c089a059437\"" Jun 20 19:15:36.748207 containerd[1732]: time="2025-06-20T19:15:36.748173596Z" level=info msg="connecting to shim 45701ba752270edc9a1420329ed09a421a86eb4255b03a5c582e0c089a059437" address="unix:///run/containerd/s/b20ca275234c741586a6ff8ef8c13d6f7d2cee6054082da36e4141290614b7e3" protocol=ttrpc version=3 Jun 20 19:15:36.768892 systemd[1]: Started cri-containerd-45701ba752270edc9a1420329ed09a421a86eb4255b03a5c582e0c089a059437.scope - libcontainer container 45701ba752270edc9a1420329ed09a421a86eb4255b03a5c582e0c089a059437. 
Jun 20 19:15:36.824967 containerd[1732]: time="2025-06-20T19:15:36.824931459Z" level=info msg="StartContainer for \"45701ba752270edc9a1420329ed09a421a86eb4255b03a5c582e0c089a059437\" returns successfully" Jun 20 19:15:37.294033 kubelet[3133]: I0620 19:15:37.293957 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56bc98767b-z6f9b" podStartSLOduration=42.435638572 podStartE2EDuration="50.29393414s" podCreationTimestamp="2025-06-20 19:14:47 +0000 UTC" firstStartedPulling="2025-06-20 19:15:28.468573904 +0000 UTC m=+55.592713715" lastFinishedPulling="2025-06-20 19:15:36.326869474 +0000 UTC m=+63.451009283" observedRunningTime="2025-06-20 19:15:37.292024281 +0000 UTC m=+64.416164097" watchObservedRunningTime="2025-06-20 19:15:37.29393414 +0000 UTC m=+64.418073953" Jun 20 19:15:37.319721 kubelet[3133]: I0620 19:15:37.318753 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56bc98767b-dgvxb" podStartSLOduration=43.06059027 podStartE2EDuration="50.318730114s" podCreationTimestamp="2025-06-20 19:14:47 +0000 UTC" firstStartedPulling="2025-06-20 19:15:29.436804936 +0000 UTC m=+56.560944745" lastFinishedPulling="2025-06-20 19:15:36.694944774 +0000 UTC m=+63.819084589" observedRunningTime="2025-06-20 19:15:37.317800835 +0000 UTC m=+64.441940653" watchObservedRunningTime="2025-06-20 19:15:37.318730114 +0000 UTC m=+64.442869935" Jun 20 19:15:38.145672 containerd[1732]: time="2025-06-20T19:15:38.145622831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:38.148664 containerd[1732]: time="2025-06-20T19:15:38.148625329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8758389" Jun 20 19:15:38.155589 containerd[1732]: time="2025-06-20T19:15:38.155517017Z" level=info msg="ImageCreate event 
name:\"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:38.159113 containerd[1732]: time="2025-06-20T19:15:38.159039349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:38.159779 containerd[1732]: time="2025-06-20T19:15:38.159492143Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"10251092\" in 1.462990384s" Jun 20 19:15:38.159779 containerd[1732]: time="2025-06-20T19:15:38.159523373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\"" Jun 20 19:15:38.161716 containerd[1732]: time="2025-06-20T19:15:38.161675586Z" level=info msg="CreateContainer within sandbox \"d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 20 19:15:38.182719 containerd[1732]: time="2025-06-20T19:15:38.182652042Z" level=info msg="Container ba600c082d7389e4aa4f1455d1af253b440889b3c66a62b3f14b8e62aea5a4fd: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:38.205174 containerd[1732]: time="2025-06-20T19:15:38.205133595Z" level=info msg="CreateContainer within sandbox \"d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ba600c082d7389e4aa4f1455d1af253b440889b3c66a62b3f14b8e62aea5a4fd\"" Jun 20 19:15:38.205899 containerd[1732]: time="2025-06-20T19:15:38.205591262Z" 
level=info msg="StartContainer for \"ba600c082d7389e4aa4f1455d1af253b440889b3c66a62b3f14b8e62aea5a4fd\"" Jun 20 19:15:38.207317 containerd[1732]: time="2025-06-20T19:15:38.207265439Z" level=info msg="connecting to shim ba600c082d7389e4aa4f1455d1af253b440889b3c66a62b3f14b8e62aea5a4fd" address="unix:///run/containerd/s/c50464dd9be3d5d4752501d7088378fb552cd9f036e6b9f3f482a16bb921d673" protocol=ttrpc version=3 Jun 20 19:15:38.226905 systemd[1]: Started cri-containerd-ba600c082d7389e4aa4f1455d1af253b440889b3c66a62b3f14b8e62aea5a4fd.scope - libcontainer container ba600c082d7389e4aa4f1455d1af253b440889b3c66a62b3f14b8e62aea5a4fd. Jun 20 19:15:38.261784 containerd[1732]: time="2025-06-20T19:15:38.261746840Z" level=info msg="StartContainer for \"ba600c082d7389e4aa4f1455d1af253b440889b3c66a62b3f14b8e62aea5a4fd\" returns successfully" Jun 20 19:15:38.263007 containerd[1732]: time="2025-06-20T19:15:38.262973094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 20 19:15:38.275681 kubelet[3133]: I0620 19:15:38.275596 3133 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:15:38.275681 kubelet[3133]: I0620 19:15:38.275647 3133 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:15:39.678522 containerd[1732]: time="2025-06-20T19:15:39.678469412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:39.681094 containerd[1732]: time="2025-06-20T19:15:39.681057136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=14705633" Jun 20 19:15:39.683791 containerd[1732]: time="2025-06-20T19:15:39.683744534Z" level=info msg="ImageCreate event name:\"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:39.687406 
containerd[1732]: time="2025-06-20T19:15:39.687357169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:39.687919 containerd[1732]: time="2025-06-20T19:15:39.687770734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"16198288\" in 1.424762551s" Jun 20 19:15:39.687919 containerd[1732]: time="2025-06-20T19:15:39.687804409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\"" Jun 20 19:15:39.690480 containerd[1732]: time="2025-06-20T19:15:39.690005951Z" level=info msg="CreateContainer within sandbox \"d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 20 19:15:39.721075 containerd[1732]: time="2025-06-20T19:15:39.719819222Z" level=info msg="Container 7a4a211abdded2ab608bbb10abb418538123ba6bb55e1a6d4e37a6c3949281de: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:39.742360 containerd[1732]: time="2025-06-20T19:15:39.742217134Z" level=info msg="CreateContainer within sandbox \"d8b36a5a7ddb186b70ac1eee6c08a744dcff7d785204d4f6be5bc5c0e8ef264b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7a4a211abdded2ab608bbb10abb418538123ba6bb55e1a6d4e37a6c3949281de\"" Jun 20 19:15:39.743103 containerd[1732]: time="2025-06-20T19:15:39.743046504Z" level=info msg="StartContainer for 
\"7a4a211abdded2ab608bbb10abb418538123ba6bb55e1a6d4e37a6c3949281de\"" Jun 20 19:15:39.744925 containerd[1732]: time="2025-06-20T19:15:39.744888771Z" level=info msg="connecting to shim 7a4a211abdded2ab608bbb10abb418538123ba6bb55e1a6d4e37a6c3949281de" address="unix:///run/containerd/s/c50464dd9be3d5d4752501d7088378fb552cd9f036e6b9f3f482a16bb921d673" protocol=ttrpc version=3 Jun 20 19:15:39.768916 systemd[1]: Started cri-containerd-7a4a211abdded2ab608bbb10abb418538123ba6bb55e1a6d4e37a6c3949281de.scope - libcontainer container 7a4a211abdded2ab608bbb10abb418538123ba6bb55e1a6d4e37a6c3949281de. Jun 20 19:15:39.803952 containerd[1732]: time="2025-06-20T19:15:39.803838525Z" level=info msg="StartContainer for \"7a4a211abdded2ab608bbb10abb418538123ba6bb55e1a6d4e37a6c3949281de\" returns successfully" Jun 20 19:15:40.120738 kubelet[3133]: I0620 19:15:40.120688 3133 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 20 19:15:40.120738 kubelet[3133]: I0620 19:15:40.120741 3133 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 20 19:15:44.048439 kubelet[3133]: I0620 19:15:44.048278 3133 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:15:44.069732 kubelet[3133]: I0620 19:15:44.068374 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tcgqf" podStartSLOduration=44.751535451 podStartE2EDuration="54.068355451s" podCreationTimestamp="2025-06-20 19:14:50 +0000 UTC" firstStartedPulling="2025-06-20 19:15:30.371749077 +0000 UTC m=+57.495888880" lastFinishedPulling="2025-06-20 19:15:39.688569065 +0000 UTC m=+66.812708880" observedRunningTime="2025-06-20 19:15:40.295327118 +0000 UTC m=+67.419466944" watchObservedRunningTime="2025-06-20 19:15:44.068355451 +0000 UTC m=+71.192495269" 
Jun 20 19:15:46.037467 containerd[1732]: time="2025-06-20T19:15:46.037421963Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e71c4892156c7d122d86bb1c80b7ee5f057c3eb2971ca2b5767cdc352c888bc9\" id:\"96c7ae78e254ec5ce56cfc45491bdeb974e26158a7961e6337d3cf41133d8f2d\" pid:5534 exited_at:{seconds:1750446946 nanos:37150076}" Jun 20 19:15:51.252786 containerd[1732]: time="2025-06-20T19:15:51.252741310Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbd6ffdf315ecc94ffff356a0813588b735f49e5db96528afce24fd34fb10381\" id:\"6865432195fbaba0c1051588ecc19dd238c05573da4d8f4d78be44adbee8cbfa\" pid:5557 exited_at:{seconds:1750446951 nanos:251940826}" Jun 20 19:16:00.296322 containerd[1732]: time="2025-06-20T19:16:00.296260269Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9edbd9e376f5d30d997cc71ab35e90d849937fdd1c66b645f62f2a61a06c387\" id:\"a2245a4b649a77b2fe46f0b21172a6a719cbb1f6b35b5e55e030a2b0f17745bb\" pid:5583 exited_at:{seconds:1750446960 nanos:295828233}" Jun 20 19:16:04.307056 containerd[1732]: time="2025-06-20T19:16:04.306977941Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e71c4892156c7d122d86bb1c80b7ee5f057c3eb2971ca2b5767cdc352c888bc9\" id:\"1bdd20e2809c2fa2980c7736c05c896105fc177025f5af2a3962902bb3e18e3f\" pid:5613 exited_at:{seconds:1750446964 nanos:306385794}" Jun 20 19:16:05.130178 kubelet[3133]: I0620 19:16:05.130056 3133 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:16:06.569799 containerd[1732]: time="2025-06-20T19:16:06.569738957Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9edbd9e376f5d30d997cc71ab35e90d849937fdd1c66b645f62f2a61a06c387\" id:\"6bce5cb70c02eec8dac3824e89fe3fa60184789e409b022a7b397b8c3cd514f6\" pid:5636 exited_at:{seconds:1750446966 nanos:569288628}" Jun 20 19:16:21.241381 containerd[1732]: time="2025-06-20T19:16:21.241250824Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"bbd6ffdf315ecc94ffff356a0813588b735f49e5db96528afce24fd34fb10381\" id:\"083af4ebf7c54bf79f65b1d49e73dc3a48090fa07f58679e538f0d129a87637f\" pid:5662 exited_at:{seconds:1750446981 nanos:240018338}" Jun 20 19:16:23.681005 systemd[1]: Started sshd@7-10.200.4.4:22-10.200.16.10:59792.service - OpenSSH per-connection server daemon (10.200.16.10:59792). Jun 20 19:16:24.281289 sshd[5680]: Accepted publickey for core from 10.200.16.10 port 59792 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:16:24.282607 sshd-session[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:24.287249 systemd-logind[1709]: New session 10 of user core. Jun 20 19:16:24.291885 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 19:16:24.800834 sshd[5682]: Connection closed by 10.200.16.10 port 59792 Jun 20 19:16:24.803923 sshd-session[5680]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:24.809079 systemd[1]: sshd@7-10.200.4.4:22-10.200.16.10:59792.service: Deactivated successfully. Jun 20 19:16:24.812549 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:16:24.816264 systemd-logind[1709]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:16:24.820248 systemd-logind[1709]: Removed session 10. Jun 20 19:16:29.906987 systemd[1]: Started sshd@8-10.200.4.4:22-10.200.16.10:48630.service - OpenSSH per-connection server daemon (10.200.16.10:48630). 
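The sshd "Accepted publickey" lines above follow a fixed OpenSSH format (user, source address, port, key type, SHA256 fingerprint), which makes them easy to audit with a regex. A minimal sketch against the session-10 login above (illustrative only; real audit tooling should handle the other `Accepted`/`Failed` variants too):

```python
import re

# The "Accepted publickey" message from the sshd[5680] entry above.
line = ('Accepted publickey for core from 10.200.16.10 port 59792 ssh2: '
        'RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8')

m = re.match(r'Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) SHA256:(\S+)', line)
user, src_ip, port, key_type, fingerprint = m.groups()
```

Matching the fingerprint across sessions shows the same key opened both session 10 and session 11 in this log.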
Jun 20 19:16:30.401569 containerd[1732]: time="2025-06-20T19:16:30.401434848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9edbd9e376f5d30d997cc71ab35e90d849937fdd1c66b645f62f2a61a06c387\" id:\"f049b3b32b5ec88b56c01161bcdead0bc63ea70f64de6f9a359137a2dbdcf00e\" pid:5711 exited_at:{seconds:1750446990 nanos:400634204}"
Jun 20 19:16:30.511962 sshd[5696]: Accepted publickey for core from 10.200.16.10 port 48630 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:30.513262 sshd-session[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:30.517866 systemd-logind[1709]: New session 11 of user core.
Jun 20 19:16:30.525871 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 20 19:16:30.998206 sshd[5721]: Connection closed by 10.200.16.10 port 48630
Jun 20 19:16:30.998910 sshd-session[5696]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:31.003559 systemd-logind[1709]: Session 11 logged out. Waiting for processes to exit.
Jun 20 19:16:31.004503 systemd[1]: sshd@8-10.200.4.4:22-10.200.16.10:48630.service: Deactivated successfully.
Jun 20 19:16:31.006502 systemd[1]: session-11.scope: Deactivated successfully.
Jun 20 19:16:31.011967 systemd-logind[1709]: Removed session 11.
Jun 20 19:16:34.306115 containerd[1732]: time="2025-06-20T19:16:34.306067347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e71c4892156c7d122d86bb1c80b7ee5f057c3eb2971ca2b5767cdc352c888bc9\" id:\"a5db1e84452de0c330a3e90de61bb1187c984dcab13c7267d3b815f41bd957d9\" pid:5745 exited_at:{seconds:1750446994 nanos:305566094}"
Jun 20 19:16:36.110945 systemd[1]: Started sshd@9-10.200.4.4:22-10.200.16.10:48640.service - OpenSSH per-connection server daemon (10.200.16.10:48640).
Jun 20 19:16:36.442846 update_engine[1711]: I20250620 19:16:36.442586 1711 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jun 20 19:16:36.442846 update_engine[1711]: I20250620 19:16:36.442631 1711 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jun 20 19:16:36.442846 update_engine[1711]: I20250620 19:16:36.442816 1711 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jun 20 19:16:36.443655 update_engine[1711]: I20250620 19:16:36.443217 1711 omaha_request_params.cc:62] Current group set to beta
Jun 20 19:16:36.443655 update_engine[1711]: I20250620 19:16:36.443328 1711 update_attempter.cc:499] Already updated boot flags. Skipping.
Jun 20 19:16:36.443655 update_engine[1711]: I20250620 19:16:36.443334 1711 update_attempter.cc:643] Scheduling an action processor start.
Jun 20 19:16:36.443655 update_engine[1711]: I20250620 19:16:36.443352 1711 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jun 20 19:16:36.443655 update_engine[1711]: I20250620 19:16:36.443385 1711 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jun 20 19:16:36.443655 update_engine[1711]: I20250620 19:16:36.443439 1711 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jun 20 19:16:36.443655 update_engine[1711]: I20250620 19:16:36.443445 1711 omaha_request_action.cc:272] Request:
Jun 20 19:16:36.443655 update_engine[1711]:
Jun 20 19:16:36.443655 update_engine[1711]:
Jun 20 19:16:36.443655 update_engine[1711]:
Jun 20 19:16:36.443655 update_engine[1711]:
Jun 20 19:16:36.443655 update_engine[1711]:
Jun 20 19:16:36.443655 update_engine[1711]:
Jun 20 19:16:36.443655 update_engine[1711]:
Jun 20 19:16:36.443655 update_engine[1711]:
Jun 20 19:16:36.443655 update_engine[1711]: I20250620 19:16:36.443451 1711 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jun 20 19:16:36.444569 locksmithd[1777]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jun 20 19:16:36.444865 update_engine[1711]: I20250620 19:16:36.444820 1711 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jun 20 19:16:36.445240 update_engine[1711]: I20250620 19:16:36.445217 1711 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jun 20 19:16:36.471456 update_engine[1711]: E20250620 19:16:36.471412 1711 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jun 20 19:16:36.471567 update_engine[1711]: I20250620 19:16:36.471503 1711 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jun 20 19:16:36.727622 sshd[5756]: Accepted publickey for core from 10.200.16.10 port 48640 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:36.728424 sshd-session[5756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:36.733516 systemd-logind[1709]: New session 12 of user core.
Jun 20 19:16:36.737877 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 20 19:16:37.197189 sshd[5758]: Connection closed by 10.200.16.10 port 48640
Jun 20 19:16:37.197508 sshd-session[5756]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:37.201190 systemd[1]: sshd@9-10.200.4.4:22-10.200.16.10:48640.service: Deactivated successfully.
Jun 20 19:16:37.203345 systemd[1]: session-12.scope: Deactivated successfully.
Jun 20 19:16:37.204481 systemd-logind[1709]: Session 12 logged out. Waiting for processes to exit.
Jun 20 19:16:37.205775 systemd-logind[1709]: Removed session 12.
Jun 20 19:16:37.306401 systemd[1]: Started sshd@10-10.200.4.4:22-10.200.16.10:48650.service - OpenSSH per-connection server daemon (10.200.16.10:48650).
Jun 20 19:16:37.894511 sshd[5771]: Accepted publickey for core from 10.200.16.10 port 48650 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:37.895689 sshd-session[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:37.900258 systemd-logind[1709]: New session 13 of user core.
Jun 20 19:16:37.904858 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 20 19:16:38.394649 sshd[5773]: Connection closed by 10.200.16.10 port 48650
Jun 20 19:16:38.395242 sshd-session[5771]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:38.398965 systemd[1]: sshd@10-10.200.4.4:22-10.200.16.10:48650.service: Deactivated successfully.
Jun 20 19:16:38.400975 systemd[1]: session-13.scope: Deactivated successfully.
Jun 20 19:16:38.401839 systemd-logind[1709]: Session 13 logged out. Waiting for processes to exit.
Jun 20 19:16:38.403042 systemd-logind[1709]: Removed session 13.
Jun 20 19:16:38.504432 systemd[1]: Started sshd@11-10.200.4.4:22-10.200.16.10:60400.service - OpenSSH per-connection server daemon (10.200.16.10:60400).
Jun 20 19:16:39.096559 sshd[5782]: Accepted publickey for core from 10.200.16.10 port 60400 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:39.098471 sshd-session[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:39.102529 systemd-logind[1709]: New session 14 of user core.
Jun 20 19:16:39.111913 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 20 19:16:39.572418 sshd[5786]: Connection closed by 10.200.16.10 port 60400
Jun 20 19:16:39.573082 sshd-session[5782]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:39.576477 systemd[1]: sshd@11-10.200.4.4:22-10.200.16.10:60400.service: Deactivated successfully.
Jun 20 19:16:39.578459 systemd[1]: session-14.scope: Deactivated successfully.
Jun 20 19:16:39.579548 systemd-logind[1709]: Session 14 logged out. Waiting for processes to exit.
Jun 20 19:16:39.580994 systemd-logind[1709]: Removed session 14.
Jun 20 19:16:44.710269 systemd[1]: Started sshd@12-10.200.4.4:22-10.200.16.10:60404.service - OpenSSH per-connection server daemon (10.200.16.10:60404).
Jun 20 19:16:45.331143 sshd[5809]: Accepted publickey for core from 10.200.16.10 port 60404 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:45.332419 sshd-session[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:45.336985 systemd-logind[1709]: New session 15 of user core.
Jun 20 19:16:45.340879 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 19:16:45.811587 sshd[5811]: Connection closed by 10.200.16.10 port 60404
Jun 20 19:16:45.812262 sshd-session[5809]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:45.815037 systemd[1]: sshd@12-10.200.4.4:22-10.200.16.10:60404.service: Deactivated successfully.
Jun 20 19:16:45.817174 systemd[1]: session-15.scope: Deactivated successfully.
Jun 20 19:16:45.819252 systemd-logind[1709]: Session 15 logged out. Waiting for processes to exit.
Jun 20 19:16:45.820235 systemd-logind[1709]: Removed session 15.
Jun 20 19:16:46.038655 containerd[1732]: time="2025-06-20T19:16:46.038601034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e71c4892156c7d122d86bb1c80b7ee5f057c3eb2971ca2b5767cdc352c888bc9\" id:\"41549fcb536263d6d6a0d362da0f795decd069b66771c8aa7266c8ab67b45402\" pid:5834 exited_at:{seconds:1750447006 nanos:38385607}"
Jun 20 19:16:46.441414 update_engine[1711]: I20250620 19:16:46.441354 1711 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jun 20 19:16:46.441935 update_engine[1711]: I20250620 19:16:46.441622 1711 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jun 20 19:16:46.441999 update_engine[1711]: I20250620 19:16:46.441959 1711 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jun 20 19:16:46.459450 update_engine[1711]: E20250620 19:16:46.459411 1711 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jun 20 19:16:46.459541 update_engine[1711]: I20250620 19:16:46.459479 1711 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jun 20 19:16:50.918358 systemd[1]: Started sshd@13-10.200.4.4:22-10.200.16.10:58092.service - OpenSSH per-connection server daemon (10.200.16.10:58092).
Jun 20 19:16:51.235509 containerd[1732]: time="2025-06-20T19:16:51.235374655Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbd6ffdf315ecc94ffff356a0813588b735f49e5db96528afce24fd34fb10381\" id:\"8853c2966be0e116b77ee4e285b2a1f0be4273f64c53891e8b8fe2721e64aa79\" pid:5861 exited_at:{seconds:1750447011 nanos:235042394}"
Jun 20 19:16:51.508122 sshd[5846]: Accepted publickey for core from 10.200.16.10 port 58092 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:51.509389 sshd-session[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:51.514112 systemd-logind[1709]: New session 16 of user core.
Jun 20 19:16:51.517896 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 20 19:16:51.975609 sshd[5872]: Connection closed by 10.200.16.10 port 58092
Jun 20 19:16:51.976083 sshd-session[5846]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:51.979601 systemd[1]: sshd@13-10.200.4.4:22-10.200.16.10:58092.service: Deactivated successfully.
Jun 20 19:16:51.981771 systemd[1]: session-16.scope: Deactivated successfully.
Jun 20 19:16:51.982962 systemd-logind[1709]: Session 16 logged out. Waiting for processes to exit.
Jun 20 19:16:51.984441 systemd-logind[1709]: Removed session 16.
Jun 20 19:16:56.443373 update_engine[1711]: I20250620 19:16:56.443277 1711 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jun 20 19:16:56.443910 update_engine[1711]: I20250620 19:16:56.443621 1711 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jun 20 19:16:56.444038 update_engine[1711]: I20250620 19:16:56.444010 1711 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jun 20 19:16:56.644897 update_engine[1711]: E20250620 19:16:56.644822 1711 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jun 20 19:16:56.645021 update_engine[1711]: I20250620 19:16:56.644934 1711 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jun 20 19:16:57.083855 systemd[1]: Started sshd@14-10.200.4.4:22-10.200.16.10:58094.service - OpenSSH per-connection server daemon (10.200.16.10:58094).
Jun 20 19:16:57.677435 sshd[5884]: Accepted publickey for core from 10.200.16.10 port 58094 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:57.678640 sshd-session[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:57.683680 systemd-logind[1709]: New session 17 of user core.
Jun 20 19:16:57.688904 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 20 19:16:58.147967 sshd[5886]: Connection closed by 10.200.16.10 port 58094
Jun 20 19:16:58.148604 sshd-session[5884]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:58.151879 systemd[1]: sshd@14-10.200.4.4:22-10.200.16.10:58094.service: Deactivated successfully.
Jun 20 19:16:58.154102 systemd[1]: session-17.scope: Deactivated successfully.
Jun 20 19:16:58.155186 systemd-logind[1709]: Session 17 logged out. Waiting for processes to exit.
Jun 20 19:16:58.156841 systemd-logind[1709]: Removed session 17.
Jun 20 19:16:58.257487 systemd[1]: Started sshd@15-10.200.4.4:22-10.200.16.10:58100.service - OpenSSH per-connection server daemon (10.200.16.10:58100).
Jun 20 19:16:58.853959 sshd[5912]: Accepted publickey for core from 10.200.16.10 port 58100 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:58.856317 sshd-session[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:58.862888 systemd-logind[1709]: New session 18 of user core.
Jun 20 19:16:58.869872 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 20 19:16:59.380236 sshd[5918]: Connection closed by 10.200.16.10 port 58100
Jun 20 19:16:59.380862 sshd-session[5912]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:59.384202 systemd[1]: sshd@15-10.200.4.4:22-10.200.16.10:58100.service: Deactivated successfully.
Jun 20 19:16:59.386259 systemd[1]: session-18.scope: Deactivated successfully.
Jun 20 19:16:59.387074 systemd-logind[1709]: Session 18 logged out. Waiting for processes to exit.
Jun 20 19:16:59.388327 systemd-logind[1709]: Removed session 18.
Jun 20 19:16:59.485648 systemd[1]: Started sshd@16-10.200.4.4:22-10.200.16.10:37196.service - OpenSSH per-connection server daemon (10.200.16.10:37196).
Jun 20 19:17:00.073818 sshd[5931]: Accepted publickey for core from 10.200.16.10 port 37196 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:00.076880 sshd-session[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:00.085389 systemd-logind[1709]: New session 19 of user core.
Jun 20 19:17:00.089006 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 20 19:17:00.317979 containerd[1732]: time="2025-06-20T19:17:00.317932613Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9edbd9e376f5d30d997cc71ab35e90d849937fdd1c66b645f62f2a61a06c387\" id:\"35aca768a67660bf25504a840ad0c447d5f993dcd14556af89f6037eb493d6a6\" pid:5946 exited_at:{seconds:1750447020 nanos:317485935}"
Jun 20 19:17:01.370691 sshd[5933]: Connection closed by 10.200.16.10 port 37196
Jun 20 19:17:01.371374 sshd-session[5931]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:01.374884 systemd[1]: sshd@16-10.200.4.4:22-10.200.16.10:37196.service: Deactivated successfully.
Jun 20 19:17:01.377170 systemd[1]: session-19.scope: Deactivated successfully.
Jun 20 19:17:01.378209 systemd-logind[1709]: Session 19 logged out. Waiting for processes to exit.
Jun 20 19:17:01.379560 systemd-logind[1709]: Removed session 19.
Jun 20 19:17:01.480512 systemd[1]: Started sshd@17-10.200.4.4:22-10.200.16.10:37210.service - OpenSSH per-connection server daemon (10.200.16.10:37210).
Jun 20 19:17:02.076549 sshd[5972]: Accepted publickey for core from 10.200.16.10 port 37210 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:02.079979 sshd-session[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:02.091768 systemd-logind[1709]: New session 20 of user core.
Jun 20 19:17:02.095189 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 20 19:17:02.672187 sshd[5974]: Connection closed by 10.200.16.10 port 37210
Jun 20 19:17:02.672920 sshd-session[5972]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:02.676676 systemd[1]: sshd@17-10.200.4.4:22-10.200.16.10:37210.service: Deactivated successfully.
Jun 20 19:17:02.678571 systemd[1]: session-20.scope: Deactivated successfully.
Jun 20 19:17:02.679344 systemd-logind[1709]: Session 20 logged out. Waiting for processes to exit.
Jun 20 19:17:02.681281 systemd-logind[1709]: Removed session 20.
Jun 20 19:17:02.788155 systemd[1]: Started sshd@18-10.200.4.4:22-10.200.16.10:37224.service - OpenSSH per-connection server daemon (10.200.16.10:37224).
Jun 20 19:17:03.384891 sshd[5984]: Accepted publickey for core from 10.200.16.10 port 37224 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:03.386970 sshd-session[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:03.392754 systemd-logind[1709]: New session 21 of user core.
Jun 20 19:17:03.398921 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 20 19:17:03.951775 sshd[5986]: Connection closed by 10.200.16.10 port 37224
Jun 20 19:17:03.952392 sshd-session[5984]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:03.955638 systemd-logind[1709]: Session 21 logged out. Waiting for processes to exit.
Jun 20 19:17:03.956175 systemd[1]: sshd@18-10.200.4.4:22-10.200.16.10:37224.service: Deactivated successfully.
Jun 20 19:17:03.958956 systemd[1]: session-21.scope: Deactivated successfully.
Jun 20 19:17:03.962999 systemd-logind[1709]: Removed session 21.
Jun 20 19:17:04.296856 containerd[1732]: time="2025-06-20T19:17:04.296808497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e71c4892156c7d122d86bb1c80b7ee5f057c3eb2971ca2b5767cdc352c888bc9\" id:\"7855b827d08a077eddb781acced36028b2c74799ec624b79d98ac7ee60992190\" pid:6009 exited_at:{seconds:1750447024 nanos:296547199}"
Jun 20 19:17:06.443512 update_engine[1711]: I20250620 19:17:06.442890 1711 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jun 20 19:17:06.443512 update_engine[1711]: I20250620 19:17:06.443171 1711 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jun 20 19:17:06.443512 update_engine[1711]: I20250620 19:17:06.443466 1711 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jun 20 19:17:06.460356 update_engine[1711]: E20250620 19:17:06.460295 1711 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jun 20 19:17:06.460733 update_engine[1711]: I20250620 19:17:06.460497 1711 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jun 20 19:17:06.460733 update_engine[1711]: I20250620 19:17:06.460506 1711 omaha_request_action.cc:617] Omaha request response:
Jun 20 19:17:06.460894 update_engine[1711]: E20250620 19:17:06.460774 1711 omaha_request_action.cc:636] Omaha request network transfer failed.
Jun 20 19:17:06.460894 update_engine[1711]: I20250620 19:17:06.460794 1711 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jun 20 19:17:06.460894 update_engine[1711]: I20250620 19:17:06.460799 1711 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jun 20 19:17:06.460894 update_engine[1711]: I20250620 19:17:06.460803 1711 update_attempter.cc:306] Processing Done.
Jun 20 19:17:06.460894 update_engine[1711]: E20250620 19:17:06.460820 1711 update_attempter.cc:619] Update failed.
Jun 20 19:17:06.460894 update_engine[1711]: I20250620 19:17:06.460824 1711 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jun 20 19:17:06.461549 update_engine[1711]: I20250620 19:17:06.460829 1711 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jun 20 19:17:06.461549 update_engine[1711]: I20250620 19:17:06.460930 1711 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jun 20 19:17:06.461549 update_engine[1711]: I20250620 19:17:06.461371 1711 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jun 20 19:17:06.461549 update_engine[1711]: I20250620 19:17:06.461406 1711 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jun 20 19:17:06.461549 update_engine[1711]: I20250620 19:17:06.461423 1711 omaha_request_action.cc:272] Request:
Jun 20 19:17:06.461549 update_engine[1711]:
Jun 20 19:17:06.461549 update_engine[1711]:
Jun 20 19:17:06.461549 update_engine[1711]:
Jun 20 19:17:06.461549 update_engine[1711]:
Jun 20 19:17:06.461549 update_engine[1711]:
Jun 20 19:17:06.461549 update_engine[1711]:
Jun 20 19:17:06.461549 update_engine[1711]: I20250620 19:17:06.461431 1711 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jun 20 19:17:06.462095 locksmithd[1777]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jun 20 19:17:06.462724 update_engine[1711]: I20250620 19:17:06.461974 1711 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jun 20 19:17:06.462724 update_engine[1711]: I20250620 19:17:06.462435 1711 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jun 20 19:17:06.489162 update_engine[1711]: E20250620 19:17:06.488937 1711 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jun 20 19:17:06.489162 update_engine[1711]: I20250620 19:17:06.489024 1711 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jun 20 19:17:06.489162 update_engine[1711]: I20250620 19:17:06.489030 1711 omaha_request_action.cc:617] Omaha request response:
Jun 20 19:17:06.489162 update_engine[1711]: I20250620 19:17:06.489037 1711 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jun 20 19:17:06.489162 update_engine[1711]: I20250620 19:17:06.489042 1711 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jun 20 19:17:06.489162 update_engine[1711]: I20250620 19:17:06.489046 1711 update_attempter.cc:306] Processing Done.
Jun 20 19:17:06.489162 update_engine[1711]: I20250620 19:17:06.489053 1711 update_attempter.cc:310] Error event sent.
Jun 20 19:17:06.489162 update_engine[1711]: I20250620 19:17:06.489063 1711 update_check_scheduler.cc:74] Next update check in 41m9s
Jun 20 19:17:06.489841 locksmithd[1777]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jun 20 19:17:06.584833 containerd[1732]: time="2025-06-20T19:17:06.584792862Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9edbd9e376f5d30d997cc71ab35e90d849937fdd1c66b645f62f2a61a06c387\" id:\"5529c9cbcb6d5ef72c418df54de90234343b465ae9790b4ffb716452f8b34c52\" pid:6030 exited_at:{seconds:1750447026 nanos:584323533}"
Jun 20 19:17:09.058465 systemd[1]: Started sshd@19-10.200.4.4:22-10.200.16.10:40180.service - OpenSSH per-connection server daemon (10.200.16.10:40180).
Jun 20 19:17:09.655163 sshd[6045]: Accepted publickey for core from 10.200.16.10 port 40180 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:09.656429 sshd-session[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:09.660757 systemd-logind[1709]: New session 22 of user core.
Jun 20 19:17:09.666838 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 20 19:17:10.123383 sshd[6047]: Connection closed by 10.200.16.10 port 40180
Jun 20 19:17:10.124036 sshd-session[6045]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:10.126973 systemd[1]: sshd@19-10.200.4.4:22-10.200.16.10:40180.service: Deactivated successfully.
Jun 20 19:17:10.128924 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 19:17:10.131208 systemd-logind[1709]: Session 22 logged out. Waiting for processes to exit.
Jun 20 19:17:10.132127 systemd-logind[1709]: Removed session 22.
Jun 20 19:17:15.235945 systemd[1]: Started sshd@20-10.200.4.4:22-10.200.16.10:40184.service - OpenSSH per-connection server daemon (10.200.16.10:40184).
Jun 20 19:17:15.844272 sshd[6059]: Accepted publickey for core from 10.200.16.10 port 40184 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:15.847294 sshd-session[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:15.858568 systemd-logind[1709]: New session 23 of user core.
Jun 20 19:17:15.860346 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 19:17:16.331837 sshd[6061]: Connection closed by 10.200.16.10 port 40184
Jun 20 19:17:16.334905 sshd-session[6059]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:16.338369 systemd-logind[1709]: Session 23 logged out. Waiting for processes to exit.
Jun 20 19:17:16.339153 systemd[1]: sshd@20-10.200.4.4:22-10.200.16.10:40184.service: Deactivated successfully.
Jun 20 19:17:16.342913 systemd[1]: session-23.scope: Deactivated successfully.
Jun 20 19:17:16.346900 systemd-logind[1709]: Removed session 23.
Jun 20 19:17:21.230447 containerd[1732]: time="2025-06-20T19:17:21.230378704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbd6ffdf315ecc94ffff356a0813588b735f49e5db96528afce24fd34fb10381\" id:\"c91d95e00614998d9514ef9241d5641bbfcbd7de712bdc7a32c67d4f5b0db7c8\" pid:6084 exited_at:{seconds:1750447041 nanos:229989743}"
Jun 20 19:17:21.447396 systemd[1]: Started sshd@21-10.200.4.4:22-10.200.16.10:34474.service - OpenSSH per-connection server daemon (10.200.16.10:34474).
Jun 20 19:17:22.034998 sshd[6097]: Accepted publickey for core from 10.200.16.10 port 34474 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:22.036301 sshd-session[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:22.040757 systemd-logind[1709]: New session 24 of user core.
Jun 20 19:17:22.044854 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 20 19:17:22.499866 sshd[6099]: Connection closed by 10.200.16.10 port 34474
Jun 20 19:17:22.500720 sshd-session[6097]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:22.504183 systemd[1]: sshd@21-10.200.4.4:22-10.200.16.10:34474.service: Deactivated successfully.
Jun 20 19:17:22.506254 systemd[1]: session-24.scope: Deactivated successfully.
Jun 20 19:17:22.507135 systemd-logind[1709]: Session 24 logged out. Waiting for processes to exit.
Jun 20 19:17:22.508466 systemd-logind[1709]: Removed session 24.
Jun 20 19:17:27.623594 systemd[1]: Started sshd@22-10.200.4.4:22-10.200.16.10:34482.service - OpenSSH per-connection server daemon (10.200.16.10:34482).
Jun 20 19:17:28.234289 sshd[6111]: Accepted publickey for core from 10.200.16.10 port 34482 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:28.238745 sshd-session[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:28.243837 systemd-logind[1709]: New session 25 of user core.
Jun 20 19:17:28.251866 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 20 19:17:28.716884 sshd[6113]: Connection closed by 10.200.16.10 port 34482
Jun 20 19:17:28.717516 sshd-session[6111]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:28.721013 systemd[1]: sshd@22-10.200.4.4:22-10.200.16.10:34482.service: Deactivated successfully.
Jun 20 19:17:28.723104 systemd[1]: session-25.scope: Deactivated successfully.
Jun 20 19:17:28.723939 systemd-logind[1709]: Session 25 logged out. Waiting for processes to exit.
Jun 20 19:17:28.725545 systemd-logind[1709]: Removed session 25.
Jun 20 19:17:30.294576 containerd[1732]: time="2025-06-20T19:17:30.294369703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9edbd9e376f5d30d997cc71ab35e90d849937fdd1c66b645f62f2a61a06c387\" id:\"bc290c22f387f6d17f9672b7281ed3a618db16179a2b15f63aec4af352654934\" pid:6137 exited_at:{seconds:1750447050 nanos:293966612}"
Jun 20 19:17:33.831286 systemd[1]: Started sshd@23-10.200.4.4:22-10.200.16.10:40652.service - OpenSSH per-connection server daemon (10.200.16.10:40652).
Jun 20 19:17:34.335931 containerd[1732]: time="2025-06-20T19:17:34.335889624Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e71c4892156c7d122d86bb1c80b7ee5f057c3eb2971ca2b5767cdc352c888bc9\" id:\"835969bd553ca6dd7c4cfb53fa45d5db3c87f53d525ff9d388bf93414132c67e\" pid:6164 exited_at:{seconds:1750447054 nanos:335355530}"
Jun 20 19:17:34.425148 sshd[6150]: Accepted publickey for core from 10.200.16.10 port 40652 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:34.427131 sshd-session[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:34.432651 systemd-logind[1709]: New session 26 of user core.
Jun 20 19:17:34.437950 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 20 19:17:34.961718 sshd[6174]: Connection closed by 10.200.16.10 port 40652
Jun 20 19:17:34.962926 sshd-session[6150]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:34.966467 systemd[1]: sshd@23-10.200.4.4:22-10.200.16.10:40652.service: Deactivated successfully.
Jun 20 19:17:34.969779 systemd[1]: session-26.scope: Deactivated successfully.
Jun 20 19:17:34.973281 systemd-logind[1709]: Session 26 logged out. Waiting for processes to exit.
Jun 20 19:17:34.975574 systemd-logind[1709]: Removed session 26.