Jul 15 05:17:23.956717 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 03:28:48 -00 2025 Jul 15 05:17:23.956742 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:17:23.956751 kernel: BIOS-provided physical RAM map: Jul 15 05:17:23.956757 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 15 05:17:23.956763 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jul 15 05:17:23.956769 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Jul 15 05:17:23.956776 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Jul 15 05:17:23.956783 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Jul 15 05:17:23.956789 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Jul 15 05:17:23.956795 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jul 15 05:17:23.956801 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jul 15 05:17:23.956807 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jul 15 05:17:23.956812 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jul 15 05:17:23.956818 kernel: printk: legacy bootconsole [earlyser0] enabled Jul 15 05:17:23.956827 kernel: NX (Execute Disable) protection: active Jul 15 05:17:23.956834 kernel: APIC: Static calls initialized Jul 15 05:17:23.956840 kernel: efi: EFI v2.7 by Microsoft Jul 15 05:17:23.956867 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5518 RNG=0x3ffd2018 Jul 15 05:17:23.956874 kernel: random: crng init done Jul 15 05:17:23.956881 kernel: secureboot: Secure boot disabled Jul 15 05:17:23.956887 kernel: SMBIOS 3.1.0 present. 
Jul 15 05:17:23.956894 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Jul 15 05:17:23.956900 kernel: DMI: Memory slots populated: 2/2 Jul 15 05:17:23.956908 kernel: Hypervisor detected: Microsoft Hyper-V Jul 15 05:17:23.956914 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jul 15 05:17:23.956920 kernel: Hyper-V: Nested features: 0x3e0101 Jul 15 05:17:23.956926 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jul 15 05:17:23.956933 kernel: Hyper-V: Using hypercall for remote TLB flush Jul 15 05:17:23.956939 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jul 15 05:17:23.956946 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jul 15 05:17:23.956952 kernel: tsc: Detected 2300.000 MHz processor Jul 15 05:17:23.956958 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 15 05:17:23.956965 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 15 05:17:23.956973 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jul 15 05:17:23.956980 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 15 05:17:23.956987 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 15 05:17:23.956993 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jul 15 05:17:23.957000 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jul 15 05:17:23.957006 kernel: Using GB pages for direct mapping Jul 15 05:17:23.957013 kernel: ACPI: Early table checksum verification disabled Jul 15 05:17:23.957023 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jul 15 05:17:23.957031 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 05:17:23.957037 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 05:17:23.957044 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jul 15 05:17:23.957051 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jul 15 05:17:23.957058 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 05:17:23.957064 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 05:17:23.957073 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 05:17:23.957079 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jul 15 05:17:23.957086 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jul 15 05:17:23.957092 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 15 05:17:23.957099 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jul 15 05:17:23.957109 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Jul 15 05:17:23.957118 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jul 15 05:17:23.957124 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jul 15 05:17:23.957130 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jul 15 05:17:23.957138 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jul 15 05:17:23.957144 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5051] Jul 15 05:17:23.957151 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jul 15 05:17:23.957157 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jul 15 05:17:23.957168 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jul 15 05:17:23.957175 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jul 15 05:17:23.957182 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jul 15 05:17:23.957189 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jul 15 05:17:23.957195 kernel: Zone ranges: Jul 15 05:17:23.957203 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 15 05:17:23.957210 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 15 05:17:23.957217 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jul 15 05:17:23.957223 kernel: Device empty Jul 15 05:17:23.957229 kernel: Movable zone start for each node Jul 15 05:17:23.957236 kernel: Early memory node ranges Jul 15 05:17:23.957242 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 15 05:17:23.957249 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Jul 15 05:17:23.957255 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Jul 15 05:17:23.957263 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jul 15 05:17:23.957269 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jul 15 05:17:23.957276 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jul 15 05:17:23.957282 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 15 05:17:23.957289 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 15 05:17:23.957295 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jul 15 05:17:23.957302 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Jul 15 05:17:23.957309 kernel: ACPI: PM-Timer IO Port: 0x408 Jul 15 05:17:23.957316 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 15 05:17:23.957324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 15 05:17:23.957330 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 15 05:17:23.957337 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jul 15 05:17:23.957344 kernel: TSC deadline timer available Jul 15 05:17:23.957350 kernel: CPU topo: Max. logical packages: 1 Jul 15 05:17:23.957357 kernel: CPU topo: Max. logical dies: 1 Jul 15 05:17:23.957363 kernel: CPU topo: Max. dies per package: 1 Jul 15 05:17:23.957370 kernel: CPU topo: Max. threads per core: 2 Jul 15 05:17:23.957376 kernel: CPU topo: Num. cores per package: 1 Jul 15 05:17:23.957384 kernel: CPU topo: Num. 
threads per package: 2 Jul 15 05:17:23.957391 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jul 15 05:17:23.957398 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jul 15 05:17:23.957404 kernel: Booting paravirtualized kernel on Hyper-V Jul 15 05:17:23.957411 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 15 05:17:23.957417 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 15 05:17:23.957424 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jul 15 05:17:23.957430 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jul 15 05:17:23.957437 kernel: pcpu-alloc: [0] 0 1 Jul 15 05:17:23.957444 kernel: Hyper-V: PV spinlocks enabled Jul 15 05:17:23.957451 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 15 05:17:23.957459 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:17:23.957466 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 15 05:17:23.957472 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 15 05:17:23.957479 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 15 05:17:23.957486 kernel: Fallback order for Node 0: 0 Jul 15 05:17:23.957492 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Jul 15 05:17:23.957500 kernel: Policy zone: Normal Jul 15 05:17:23.957507 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 05:17:23.957513 kernel: software IO TLB: area num 2. Jul 15 05:17:23.957520 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 15 05:17:23.957526 kernel: ftrace: allocating 40097 entries in 157 pages Jul 15 05:17:23.957533 kernel: ftrace: allocated 157 pages with 5 groups Jul 15 05:17:23.957540 kernel: Dynamic Preempt: voluntary Jul 15 05:17:23.957546 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 15 05:17:23.957554 kernel: rcu: RCU event tracing is enabled. Jul 15 05:17:23.957567 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 15 05:17:23.957574 kernel: Trampoline variant of Tasks RCU enabled. Jul 15 05:17:23.957581 kernel: Rude variant of Tasks RCU enabled. Jul 15 05:17:23.957589 kernel: Tracing variant of Tasks RCU enabled. Jul 15 05:17:23.957596 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 15 05:17:23.957601 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 15 05:17:23.957605 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 05:17:23.957610 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 05:17:23.957614 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jul 15 05:17:23.957619 kernel: Using NULL legacy PIC Jul 15 05:17:23.957625 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jul 15 05:17:23.957629 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 15 05:17:23.957634 kernel: Console: colour dummy device 80x25 Jul 15 05:17:23.957638 kernel: printk: legacy console [tty1] enabled Jul 15 05:17:23.957643 kernel: printk: legacy console [ttyS0] enabled Jul 15 05:17:23.957648 kernel: printk: legacy bootconsole [earlyser0] disabled Jul 15 05:17:23.957652 kernel: ACPI: Core revision 20240827 Jul 15 05:17:23.957657 kernel: Failed to register legacy timer interrupt Jul 15 05:17:23.957662 kernel: APIC: Switch to symmetric I/O mode setup Jul 15 05:17:23.957666 kernel: x2apic enabled Jul 15 05:17:23.957671 kernel: APIC: Switched APIC routing to: physical x2apic Jul 15 05:17:23.957675 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Jul 15 05:17:23.957680 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 15 05:17:23.957687 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jul 15 05:17:23.957694 kernel: Hyper-V: Using IPI hypercalls Jul 15 05:17:23.957701 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jul 15 05:17:23.957709 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jul 15 05:17:23.957716 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jul 15 05:17:23.957724 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jul 15 05:17:23.957731 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jul 15 05:17:23.957739 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jul 15 05:17:23.957746 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Jul 15 05:17:23.957753 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000) Jul 15 05:17:23.957757 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 15 05:17:23.957763 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 15 05:17:23.957767 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 15 05:17:23.957771 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 15 05:17:23.957776 kernel: Spectre V2 : Mitigation: Retpolines Jul 15 05:17:23.957780 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 15 05:17:23.957784 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jul 15 05:17:23.957789 kernel: RETBleed: Vulnerable Jul 15 05:17:23.957797 kernel: Speculative Store Bypass: Vulnerable Jul 15 05:17:23.957806 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 15 05:17:23.957813 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 15 05:17:23.957819 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 15 05:17:23.957824 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 15 05:17:23.957829 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 15 05:17:23.957833 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 15 05:17:23.957837 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 15 05:17:23.957842 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jul 15 05:17:23.958007 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jul 15 05:17:23.958015 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jul 15 05:17:23.958023 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 15 05:17:23.958030 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jul 15 05:17:23.958035 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jul 15 05:17:23.958039 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jul 15 05:17:23.958045 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jul 15 05:17:23.958050 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jul 15 05:17:23.958054 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jul 15 05:17:23.958059 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jul 15 05:17:23.958069 kernel: Freeing SMP alternatives memory: 32K Jul 15 05:17:23.958077 kernel: pid_max: default: 32768 minimum: 301 Jul 15 05:17:23.958084 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 15 05:17:23.958089 kernel: landlock: Up and running. Jul 15 05:17:23.958094 kernel: SELinux: Initializing. Jul 15 05:17:23.958098 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 15 05:17:23.958103 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 15 05:17:23.958108 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jul 15 05:17:23.958113 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jul 15 05:17:23.958124 kernel: signal: max sigframe size: 11952 Jul 15 05:17:23.958136 kernel: rcu: Hierarchical SRCU implementation. Jul 15 05:17:23.958146 kernel: rcu: Max phase no-delay instances is 400. Jul 15 05:17:23.958156 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 15 05:17:23.958164 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 15 05:17:23.958169 kernel: smp: Bringing up secondary CPUs ... Jul 15 05:17:23.958175 kernel: smpboot: x86: Booting SMP configuration: Jul 15 05:17:23.958180 kernel: .... 
node #0, CPUs: #1 Jul 15 05:17:23.958187 kernel: smp: Brought up 1 node, 2 CPUs Jul 15 05:17:23.958193 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Jul 15 05:17:23.958199 kernel: Memory: 8077024K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54608K init, 2360K bss, 299988K reserved, 0K cma-reserved) Jul 15 05:17:23.958204 kernel: devtmpfs: initialized Jul 15 05:17:23.958210 kernel: x86/mm: Memory block size: 128MB Jul 15 05:17:23.958216 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jul 15 05:17:23.958221 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 05:17:23.958228 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 15 05:17:23.958239 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 05:17:23.958251 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 15 05:17:23.958260 kernel: audit: initializing netlink subsys (disabled) Jul 15 05:17:23.958268 kernel: audit: type=2000 audit(1752556640.028:1): state=initialized audit_enabled=0 res=1 Jul 15 05:17:23.958274 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 05:17:23.958279 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 15 05:17:23.958285 kernel: cpuidle: using governor menu Jul 15 05:17:23.958291 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 05:17:23.958298 kernel: dca service started, version 1.12.1 Jul 15 05:17:23.958308 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Jul 15 05:17:23.958320 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Jul 15 05:17:23.958329 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 15 05:17:23.958334 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 05:17:23.958339 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 15 05:17:23.958344 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 05:17:23.958348 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 15 05:17:23.958353 kernel: ACPI: Added _OSI(Module Device) Jul 15 05:17:23.958360 kernel: ACPI: Added _OSI(Processor Device) Jul 15 05:17:23.958373 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 05:17:23.958380 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 15 05:17:23.958385 kernel: ACPI: Interpreter enabled Jul 15 05:17:23.958389 kernel: ACPI: PM: (supports S0 S5) Jul 15 05:17:23.958394 kernel: ACPI: Using IOAPIC for interrupt routing Jul 15 05:17:23.958398 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 15 05:17:23.958403 kernel: PCI: Ignoring E820 reservations for host bridge windows Jul 15 05:17:23.958411 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jul 15 05:17:23.958419 kernel: iommu: Default domain type: Translated Jul 15 05:17:23.958426 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 15 05:17:23.958434 kernel: efivars: Registered efivars operations Jul 15 05:17:23.958441 kernel: PCI: Using ACPI for IRQ routing Jul 15 05:17:23.958446 kernel: PCI: System does not support PCI Jul 15 05:17:23.958451 kernel: vgaarb: loaded Jul 15 05:17:23.958456 kernel: clocksource: Switched to clocksource tsc-early Jul 15 05:17:23.958460 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 05:17:23.958465 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 05:17:23.958471 kernel: pnp: PnP ACPI init Jul 15 05:17:23.958479 kernel: pnp: PnP ACPI: found 3 devices Jul 15 05:17:23.958489 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 15 05:17:23.958496 kernel: NET: Registered PF_INET protocol family Jul 15 05:17:23.958505 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 15 05:17:23.958510 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jul 15 05:17:23.958514 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 05:17:23.958519 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 15 05:17:23.958524 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 15 05:17:23.958528 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jul 15 05:17:23.958537 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 15 05:17:23.958546 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 15 05:17:23.958553 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 05:17:23.958558 kernel: NET: Registered PF_XDP protocol family Jul 15 05:17:23.958562 kernel: PCI: CLS 0 bytes, default 64 Jul 15 05:17:23.958567 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 15 05:17:23.958571 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB) Jul 15 05:17:23.958576 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jul 15 05:17:23.958584 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jul 15 05:17:23.958597 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, 
max_idle_ns: 440795277976 ns Jul 15 05:17:23.958604 kernel: clocksource: Switched to clocksource tsc Jul 15 05:17:23.958610 kernel: Initialise system trusted keyrings Jul 15 05:17:23.958616 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jul 15 05:17:23.958621 kernel: Key type asymmetric registered Jul 15 05:17:23.958627 kernel: Asymmetric key parser 'x509' registered Jul 15 05:17:23.958635 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 15 05:17:23.958645 kernel: io scheduler mq-deadline registered Jul 15 05:17:23.958659 kernel: io scheduler kyber registered Jul 15 05:17:23.958668 kernel: io scheduler bfq registered Jul 15 05:17:23.958673 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 15 05:17:23.958679 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 05:17:23.958685 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 15 05:17:23.958690 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 15 05:17:23.958696 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jul 15 05:17:23.958701 kernel: i8042: PNP: No PS/2 controller found. Jul 15 05:17:23.958831 kernel: rtc_cmos 00:02: registered as rtc0 Jul 15 05:17:23.958934 kernel: rtc_cmos 00:02: setting system clock to 2025-07-15T05:17:23 UTC (1752556643) Jul 15 05:17:23.959011 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jul 15 05:17:23.959018 kernel: intel_pstate: Intel P-state driver initializing Jul 15 05:17:23.959024 kernel: efifb: probing for efifb Jul 15 05:17:23.959030 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 15 05:17:23.959036 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 15 05:17:23.959042 kernel: efifb: scrolling: redraw Jul 15 05:17:23.959048 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 15 05:17:23.959057 kernel: Console: switching to colour frame buffer device 128x48 Jul 15 05:17:23.959070 kernel: fb0: EFI VGA frame buffer device Jul 15 05:17:23.959079 kernel: pstore: Using crash dump compression: deflate Jul 15 05:17:23.959088 kernel: pstore: Registered efi_pstore as persistent store backend Jul 15 05:17:23.959111 kernel: NET: Registered PF_INET6 protocol family Jul 15 05:17:23.959118 kernel: Segment Routing with IPv6 Jul 15 05:17:23.959124 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 05:17:23.959129 kernel: NET: Registered PF_PACKET protocol family Jul 15 05:17:23.959135 kernel: Key type dns_resolver registered Jul 15 05:17:23.959141 kernel: IPI shorthand broadcast: enabled Jul 15 05:17:23.959147 kernel: sched_clock: Marking stable (2716360423, 81960974)->(3078464751, -280143354) Jul 15 05:17:23.959153 kernel: registered taskstats version 1 Jul 15 05:17:23.959166 kernel: Loading compiled-in X.509 certificates Jul 15 05:17:23.959172 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: a24478b628e55368911ce1800a2bd6bc158938c7' Jul 15 05:17:23.959177 kernel: Demotion targets for Node 0: null Jul 15 05:17:23.959183 kernel: Key type .fscrypt registered Jul 15 05:17:23.959188 kernel: Key type fscrypt-provisioning registered Jul 15 05:17:23.959194 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 15 05:17:23.959200 kernel: ima: Allocated hash algorithm: sha1 Jul 15 05:17:23.959206 kernel: ima: No architecture policies found Jul 15 05:17:23.959212 kernel: clk: Disabling unused clocks Jul 15 05:17:23.959219 kernel: Warning: unable to open an initial console. Jul 15 05:17:23.959228 kernel: Freeing unused kernel image (initmem) memory: 54608K Jul 15 05:17:23.959238 kernel: Write protecting the kernel read-only data: 24576k Jul 15 05:17:23.959246 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 15 05:17:23.959251 kernel: Run /init as init process Jul 15 05:17:23.959257 kernel: with arguments: Jul 15 05:17:23.959263 kernel: /init Jul 15 05:17:23.959269 kernel: with environment: Jul 15 05:17:23.959275 kernel: HOME=/ Jul 15 05:17:23.959280 kernel: TERM=linux Jul 15 05:17:23.959286 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 05:17:23.959293 systemd[1]: Successfully made /usr/ read-only. Jul 15 05:17:23.959302 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 05:17:23.959314 systemd[1]: Detected virtualization microsoft. Jul 15 05:17:23.959337 systemd[1]: Detected architecture x86-64. Jul 15 05:17:23.959344 systemd[1]: Running in initrd. Jul 15 05:17:23.959349 systemd[1]: No hostname configured, using default hostname. Jul 15 05:17:23.959356 systemd[1]: Hostname set to . Jul 15 05:17:23.959362 systemd[1]: Initializing machine ID from random generator. Jul 15 05:17:23.959368 systemd[1]: Queued start job for default target initrd.target. Jul 15 05:17:23.959374 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 05:17:23.959380 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:17:23.959392 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 15 05:17:23.959403 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 05:17:23.959413 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 15 05:17:23.959422 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 15 05:17:23.959429 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 15 05:17:23.959439 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 15 05:17:23.959446 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:17:23.959453 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:17:23.959462 systemd[1]: Reached target paths.target - Path Units. Jul 15 05:17:23.959472 systemd[1]: Reached target slices.target - Slice Units. Jul 15 05:17:23.959487 systemd[1]: Reached target swap.target - Swaps. Jul 15 05:17:23.959500 systemd[1]: Reached target timers.target - Timer Units. Jul 15 05:17:23.959510 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 05:17:23.959530 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jul 15 05:17:23.959540 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 15 05:17:23.959550 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 15 05:17:23.959561 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:17:23.959571 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 05:17:23.959581 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:17:23.959590 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:17:23.959600 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 05:17:23.959610 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 05:17:23.959620 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 15 05:17:23.959630 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 05:17:23.959642 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 05:17:23.959652 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 05:17:23.959662 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 05:17:23.959680 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:17:23.959691 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 05:17:23.959718 systemd-journald[205]: Collecting audit messages is disabled. Jul 15 05:17:23.959745 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:17:23.959756 systemd-journald[205]: Journal started Jul 15 05:17:23.959781 systemd-journald[205]: Runtime Journal (/run/log/journal/3b9892ebd3ac4eb9bfc1fe8d9328c48d) is 8M, max 158.9M, 150.9M free. Jul 15 05:17:23.966920 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 05:17:23.966083 systemd-modules-load[206]: Inserted module 'overlay' Jul 15 05:17:23.969671 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 05:17:23.974959 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 05:17:23.978704 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 05:17:23.991154 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:17:23.996193 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 05:17:24.004197 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 05:17:24.009919 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 05:17:24.009939 kernel: Bridge firewalling registered Jul 15 05:17:24.008915 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 05:17:24.010494 systemd-modules-load[206]: Inserted module 'br_netfilter' Jul 15 05:17:24.014202 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 05:17:24.020597 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:17:24.026206 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 15 05:17:24.032366 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 05:17:24.041763 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:17:24.043651 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:17:24.048280 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 05:17:24.056092 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 05:17:24.062940 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 15 05:17:24.077924 systemd-resolved[244]: Positive Trust Anchors: Jul 15 05:17:24.078109 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:17:24.078139 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:17:24.096697 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:17:24.080628 systemd-resolved[244]: Defaulting to hostname 'linux'. Jul 15 05:17:24.081444 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 05:17:24.093964 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:17:24.149866 kernel: SCSI subsystem initialized Jul 15 05:17:24.156861 kernel: Loading iSCSI transport class v2.0-870. Jul 15 05:17:24.164861 kernel: iscsi: registered transport (tcp) Jul 15 05:17:24.180320 kernel: iscsi: registered transport (qla4xxx) Jul 15 05:17:24.180357 kernel: QLogic iSCSI HBA Driver Jul 15 05:17:24.191398 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 05:17:24.201484 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:17:24.202194 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 05:17:24.231416 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 05:17:24.233950 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jul 15 05:17:24.283865 kernel: raid6: avx512x4 gen() 45123 MB/s Jul 15 05:17:24.300857 kernel: raid6: avx512x2 gen() 45736 MB/s Jul 15 05:17:24.317856 kernel: raid6: avx512x1 gen() 30001 MB/s Jul 15 05:17:24.334855 kernel: raid6: avx2x4 gen() 43933 MB/s Jul 15 05:17:24.352857 kernel: raid6: avx2x2 gen() 44444 MB/s Jul 15 05:17:24.370883 kernel: raid6: avx2x1 gen() 33760 MB/s Jul 15 05:17:24.370898 kernel: raid6: using algorithm avx512x2 gen() 45736 MB/s Jul 15 05:17:24.389221 kernel: raid6: .... xor() 37207 MB/s, rmw enabled Jul 15 05:17:24.389308 kernel: raid6: using avx512x2 recovery algorithm Jul 15 05:17:24.405862 kernel: xor: automatically using best checksumming function avx Jul 15 05:17:24.506865 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 05:17:24.510774 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 05:17:24.515709 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:17:24.535418 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jul 15 05:17:24.538706 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:17:24.546017 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 05:17:24.562011 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Jul 15 05:17:24.578021 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 05:17:24.579206 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 05:17:24.617366 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:17:24.620502 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 05:17:24.665386 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 05:17:24.665430 kernel: hv_vmbus: Vmbus version:5.3 Jul 15 05:17:24.680327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:17:24.682096 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:17:24.685043 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 15 05:17:24.686000 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:17:24.689445 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:17:24.703109 kernel: AES CTR mode by8 optimization enabled Jul 15 05:17:24.703141 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 15 05:17:24.705963 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 15 05:17:24.708278 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 15 05:17:24.716515 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:17:24.716601 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:17:24.722907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 15 05:17:24.727916 kernel: hv_vmbus: registering driver hv_pci Jul 15 05:17:24.727933 kernel: PTP clock support registered Jul 15 05:17:24.735083 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jul 15 05:17:24.741607 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jul 15 05:17:24.741758 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jul 15 05:17:24.748290 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jul 15 05:17:24.751972 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jul 15 05:17:24.755019 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jul 15 05:17:24.766482 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:17:24.773901 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jul 15 05:17:24.781791 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jul 15 05:17:24.782253 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jul 15 05:17:24.786898 kernel: hv_vmbus: registering driver hv_netvsc Jul 15 05:17:24.790003 kernel: hv_utils: Registering HyperV Utility Driver Jul 15 05:17:24.790040 kernel: hv_vmbus: registering driver hv_utils Jul 15 05:17:24.791579 kernel: hv_utils: Shutdown IC version 3.2 Jul 15 05:17:24.793375 kernel: hv_utils: TimeSync IC version 4.0 Jul 15 05:17:24.795238 kernel: hv_utils: Heartbeat IC version 3.0 Jul 15 05:17:24.377201 systemd-resolved[244]: Clock change detected. Flushing caches. Jul 15 05:17:24.384178 systemd-journald[205]: Time jumped backwards, rotating. Jul 15 05:17:24.384220 kernel: hv_vmbus: registering driver hv_storvsc Jul 15 05:17:24.384230 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 15 05:17:24.391602 kernel: scsi host0: storvsc_host_t Jul 15 05:17:24.391768 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4786dd (unnamed net_device) (uninitialized): VF slot 1 added Jul 15 05:17:24.391868 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jul 15 05:17:24.402763 kernel: hv_vmbus: registering driver hid_hyperv Jul 15 05:17:24.406754 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 15 05:17:24.409755 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 15 05:17:24.414642 kernel: nvme nvme0: pci function c05b:00:00.0 Jul 15 05:17:24.414813 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jul 15 05:17:24.679780 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 15 05:17:24.684969 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 15 05:17:24.689054 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 15 05:17:24.689243 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 15 05:17:24.690757 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 15 05:17:24.707763 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#218 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 15 05:17:24.720749 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 15 05:17:24.936748 kernel: nvme nvme0: using unchecked data buffer Jul 15 05:17:25.136507 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. 
Jul 15 05:17:25.169301 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jul 15 05:17:25.182694 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 15 05:17:25.184599 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 15 05:17:25.185632 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 05:17:25.217446 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jul 15 05:17:25.316242 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 05:17:25.316713 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 05:17:25.320279 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:17:25.323793 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 05:17:25.328843 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 05:17:25.350377 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 05:17:25.409245 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jul 15 05:17:25.409391 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jul 15 05:17:25.411868 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jul 15 05:17:25.413332 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jul 15 05:17:25.417758 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jul 15 05:17:25.420812 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jul 15 05:17:25.425778 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jul 15 05:17:25.425798 kernel: pci 7870:00:00.0: enabling Extended Tags Jul 15 05:17:25.442168 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jul 15 05:17:25.442371 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jul 15 05:17:25.445758 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jul 15 05:17:25.448779 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jul 15 05:17:25.458646 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jul 15 05:17:25.458851 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4786dd eth0: VF registering: eth1 Jul 15 05:17:25.459982 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jul 15 05:17:25.463775 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jul 15 05:17:26.225233 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 15 05:17:26.225524 disk-uuid[666]: The operation has completed successfully. Jul 15 05:17:26.272473 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 05:17:26.272556 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 05:17:26.300844 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 05:17:26.312483 sh[716]: Success Jul 15 05:17:26.339879 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 15 05:17:26.339924 kernel: device-mapper: uevent: version 1.0.3 Jul 15 05:17:26.341138 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 05:17:26.348755 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 15 05:17:26.575678 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 05:17:26.582296 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 05:17:26.595283 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 05:17:26.607843 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 05:17:26.607988 kernel: BTRFS: device fsid eb96c768-dac4-4ca9-ae1d-82815d4ce00b devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (729) Jul 15 05:17:26.610094 kernel: BTRFS info (device dm-0): first mount of filesystem eb96c768-dac4-4ca9-ae1d-82815d4ce00b Jul 15 05:17:26.611308 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:17:26.612234 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 05:17:26.942441 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 05:17:26.943121 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 05:17:26.951820 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 05:17:26.954746 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 05:17:26.959476 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 05:17:26.987031 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (762) Jul 15 05:17:26.987063 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:17:26.987073 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:17:26.990592 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 15 05:17:27.011773 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:17:27.012023 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 05:17:27.017401 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 05:17:27.031305 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 05:17:27.035691 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 05:17:27.070827 systemd-networkd[898]: lo: Link UP Jul 15 05:17:27.070833 systemd-networkd[898]: lo: Gained carrier Jul 15 05:17:27.075812 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jul 15 05:17:27.072524 systemd-networkd[898]: Enumeration completed Jul 15 05:17:27.080064 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 15 05:17:27.080268 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4786dd eth0: Data path switched to VF: enP30832s1 Jul 15 05:17:27.072967 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 05:17:27.073038 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:17:27.073041 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 15 05:17:27.077850 systemd[1]: Reached target network.target - Network. Jul 15 05:17:27.081300 systemd-networkd[898]: enP30832s1: Link UP Jul 15 05:17:27.081354 systemd-networkd[898]: eth0: Link UP Jul 15 05:17:27.081482 systemd-networkd[898]: eth0: Gained carrier Jul 15 05:17:27.081491 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:17:27.093031 systemd-networkd[898]: enP30832s1: Gained carrier Jul 15 05:17:27.106760 systemd-networkd[898]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 15 05:17:27.846948 ignition[873]: Ignition 2.21.0 Jul 15 05:17:27.846959 ignition[873]: Stage: fetch-offline Jul 15 05:17:27.847208 ignition[873]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:17:27.850239 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 05:17:27.847216 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 05:17:27.854096 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 15 05:17:27.848516 ignition[873]: parsed url from cmdline: "" Jul 15 05:17:27.848526 ignition[873]: no config URL provided Jul 15 05:17:27.848540 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 05:17:27.848549 ignition[873]: no config at "/usr/lib/ignition/user.ign" Jul 15 05:17:27.848554 ignition[873]: failed to fetch config: resource requires networking Jul 15 05:17:27.848768 ignition[873]: Ignition finished successfully Jul 15 05:17:27.885398 ignition[923]: Ignition 2.21.0 Jul 15 05:17:27.885406 ignition[923]: Stage: fetch Jul 15 05:17:27.885561 ignition[923]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:17:27.885568 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 05:17:27.885625 ignition[923]: parsed url from cmdline: "" Jul 15 05:17:27.885627 ignition[923]: no config URL provided Jul 15 05:17:27.885631 ignition[923]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 05:17:27.885635 ignition[923]: no config at "/usr/lib/ignition/user.ign" Jul 15 05:17:27.885663 ignition[923]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 15 05:17:27.953120 ignition[923]: GET result: OK Jul 15 05:17:27.954028 ignition[923]: config has been read from IMDS userdata Jul 15 05:17:27.954053 ignition[923]: parsing config with SHA512: d1d996e47d12a75d96671a46cc197593de6a4ccb63cd8da7f85e8082e60375aa5f61b401d754f9f3e3d871524eeedc5d7c5b9962ee8d0037f4ea7489834d591f Jul 15 05:17:27.959390 unknown[923]: fetched base config from "system" Jul 15 05:17:27.959398 unknown[923]: fetched base config from "system" Jul 15 05:17:27.959665 ignition[923]: fetch: fetch complete Jul 15 05:17:27.959401 unknown[923]: fetched user config from "azure" Jul 15 05:17:27.959669 ignition[923]: fetch: fetch passed Jul 15 05:17:27.961946 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 15 05:17:27.959699 ignition[923]: Ignition finished successfully Jul 15 05:17:27.966039 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 05:17:27.997800 ignition[929]: Ignition 2.21.0 Jul 15 05:17:27.997809 ignition[929]: Stage: kargs Jul 15 05:17:27.999664 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jul 15 05:17:27.997963 ignition[929]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:17:28.004225 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 15 05:17:27.997969 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 05:17:27.998564 ignition[929]: kargs: kargs passed Jul 15 05:17:27.998593 ignition[929]: Ignition finished successfully Jul 15 05:17:28.023026 ignition[935]: Ignition 2.21.0 Jul 15 05:17:28.023035 ignition[935]: Stage: disks Jul 15 05:17:28.023180 ignition[935]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:17:28.023186 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 05:17:28.026450 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 05:17:28.025678 ignition[935]: disks: disks passed Jul 15 05:17:28.025718 ignition[935]: Ignition finished successfully Jul 15 05:17:28.028147 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 05:17:28.035835 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 05:17:28.037034 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 05:17:28.040188 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:17:28.044535 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:17:28.047718 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 05:17:28.129814 systemd-fsck[944]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jul 15 05:17:28.133716 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 05:17:28.138490 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 05:17:28.389787 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 277c3938-5262-4ab1-8fa3-62fde82f8257 r/w with ordered data mode. Quota mode: none. Jul 15 05:17:28.390316 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 05:17:28.391970 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 05:17:28.408303 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 05:17:28.413706 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 05:17:28.422564 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 15 05:17:28.425066 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 05:17:28.425094 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 05:17:28.428775 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 05:17:28.430832 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 05:17:28.441935 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (953) Jul 15 05:17:28.442067 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:17:28.443760 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:17:28.446146 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 15 05:17:28.451908 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 15 05:17:28.704864 systemd-networkd[898]: enP30832s1: Gained IPv6LL Jul 15 05:17:28.896854 systemd-networkd[898]: eth0: Gained IPv6LL Jul 15 05:17:28.993647 coreos-metadata[955]: Jul 15 05:17:28.993 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 15 05:17:28.996851 coreos-metadata[955]: Jul 15 05:17:28.995 INFO Fetch successful Jul 15 05:17:28.996851 coreos-metadata[955]: Jul 15 05:17:28.995 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 15 05:17:29.004891 coreos-metadata[955]: Jul 15 05:17:29.004 INFO Fetch successful Jul 15 05:17:29.019057 coreos-metadata[955]: Jul 15 05:17:29.019 INFO wrote hostname ci-4396.0.0-n-11ebebb5c9 to /sysroot/etc/hostname Jul 15 05:17:29.022124 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 15 05:17:29.175179 initrd-setup-root[983]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 05:17:29.208497 initrd-setup-root[990]: cut: /sysroot/etc/group: No such file or directory Jul 15 05:17:29.212403 initrd-setup-root[997]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 05:17:29.232241 initrd-setup-root[1004]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 05:17:30.009052 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 05:17:30.012579 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 05:17:30.016659 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 05:17:30.027890 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 05:17:30.029770 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:17:30.045049 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 15 05:17:30.051099 ignition[1072]: INFO : Ignition 2.21.0 Jul 15 05:17:30.051099 ignition[1072]: INFO : Stage: mount Jul 15 05:17:30.054822 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:17:30.054822 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 05:17:30.054822 ignition[1072]: INFO : mount: mount passed Jul 15 05:17:30.054822 ignition[1072]: INFO : Ignition finished successfully Jul 15 05:17:30.053236 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 05:17:30.060301 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 05:17:30.087062 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 05:17:30.100746 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (1084) Jul 15 05:17:30.103193 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:17:30.103219 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:17:30.103229 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 15 05:17:30.109038 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 15 05:17:30.132524 ignition[1101]: INFO : Ignition 2.21.0 Jul 15 05:17:30.132524 ignition[1101]: INFO : Stage: files Jul 15 05:17:30.134938 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:17:30.134938 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 05:17:30.134938 ignition[1101]: DEBUG : files: compiled without relabeling support, skipping Jul 15 05:17:30.160375 ignition[1101]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 05:17:30.160375 ignition[1101]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 05:17:30.206642 ignition[1101]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 05:17:30.209809 ignition[1101]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 05:17:30.209809 ignition[1101]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 05:17:30.208900 unknown[1101]: wrote ssh authorized keys file for user: core Jul 15 05:17:30.218768 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 15 05:17:30.218768 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 15 05:17:30.521112 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 05:17:30.801572 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 15 05:17:30.801572 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 15 05:17:30.807769 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 05:17:30.807769 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:17:30.807769 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:17:30.807769 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:17:30.807769 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:17:30.807769 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 05:17:30.807769 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 05:17:30.830765 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 05:17:30.830765 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 05:17:30.830765 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:17:30.830765 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:17:30.830765 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:17:30.830765 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 15 05:17:31.413678 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 15 05:17:34.822071 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:17:34.822071 ignition[1101]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 15 05:17:34.839366 ignition[1101]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:17:34.846528 ignition[1101]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:17:34.846528 ignition[1101]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 15 05:17:34.851269 ignition[1101]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 15 05:17:34.851269 ignition[1101]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 05:17:34.851269 ignition[1101]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 05:17:34.851269 ignition[1101]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 05:17:34.851269 ignition[1101]: INFO : files: files passed Jul 15 05:17:34.851269 ignition[1101]: INFO : Ignition finished successfully Jul 15 05:17:34.850838 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 05:17:34.864922 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 05:17:34.871264 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 05:17:34.875888 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 05:17:34.877441 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 05:17:34.896989 initrd-setup-root-after-ignition[1130]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:17:34.896989 initrd-setup-root-after-ignition[1130]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:17:34.916080 initrd-setup-root-after-ignition[1134]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:17:34.900186 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 05:17:34.902221 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 05:17:34.904843 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 05:17:34.948886 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 05:17:34.948960 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
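The files-stage operations above (SSH key for core, the helm tarball, update.conf, the kubernetes.raw link, and the prepare-helm.service unit) are driven by an Ignition config that is not reproduced in the log. A hypothetical config of the same shape, built as a Python dict and serialized to JSON (spec version, SSH key, and the unit body are placeholders):

import json

config = {
    "ignition": {"version": "3.4.0"},  # placeholder spec version
    "passwd": {
        "users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder core@example"]}
        ]
    },
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
            {"path": "/etc/flatcar/update.conf",
             "mode": 420,  # 0644
             "contents": {"source": "data:,GROUP%3Dstable%0A"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"}
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service",
             "enabled": True,
             # Placeholder unit body; the real unit unpacks helm to /opt/bin.
             "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"
                         "[Service]\nType=oneshot\nExecStart=/usr/bin/true\n"
                         "[Install]\nWantedBy=multi-user.target\n"}
        ]
    },
}

print(json.dumps(config, indent=2))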
Jul 15 05:17:34.955023 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 05:17:34.957800 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 05:17:34.959206 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 05:17:34.959819 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 05:17:34.981018 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 05:17:34.983800 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 05:17:34.996861 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:17:34.999954 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:17:35.003922 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 05:17:35.006876 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 05:17:35.006992 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 05:17:35.008533 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 05:17:35.008820 systemd[1]: Stopped target basic.target - Basic System. Jul 15 05:17:35.016597 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 05:17:35.022955 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 05:17:35.026593 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 05:17:35.032787 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 05:17:35.036348 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 05:17:35.040691 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 05:17:35.042524 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 05:17:35.045594 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 05:17:35.048590 systemd[1]: Stopped target swap.target - Swaps. Jul 15 05:17:35.052845 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 05:17:35.052970 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 05:17:35.057109 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:17:35.057449 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:17:35.057729 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 05:17:35.058390 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 05:17:35.069837 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 05:17:35.069945 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 05:17:35.074079 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 05:17:35.074176 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 05:17:35.077618 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 05:17:35.077728 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 05:17:35.084305 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 15 05:17:35.084410 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jul 15 05:17:35.086891 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 05:17:35.091967 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 05:17:35.095063 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 05:17:35.095380 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:17:35.095629 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 05:17:35.095707 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 05:17:35.098777 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 05:17:35.099873 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 05:17:35.116230 ignition[1154]: INFO : Ignition 2.21.0 Jul 15 05:17:35.116230 ignition[1154]: INFO : Stage: umount Jul 15 05:17:35.116230 ignition[1154]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:17:35.116230 ignition[1154]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 05:17:35.116230 ignition[1154]: INFO : umount: umount passed Jul 15 05:17:35.118348 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 05:17:35.132224 ignition[1154]: INFO : Ignition finished successfully Jul 15 05:17:35.118432 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 05:17:35.126657 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 05:17:35.126698 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 05:17:35.132188 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 05:17:35.132236 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 05:17:35.134581 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 15 05:17:35.134617 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 15 05:17:35.136133 systemd[1]: Stopped target network.target - Network. Jul 15 05:17:35.137396 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 05:17:35.137430 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 05:17:35.137703 systemd[1]: Stopped target paths.target - Path Units. Jul 15 05:17:35.137722 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 05:17:35.141405 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:17:35.145465 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 05:17:35.149911 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 05:17:35.151968 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 05:17:35.152002 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 05:17:35.171799 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 05:17:35.171840 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 05:17:35.174310 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 05:17:35.174352 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 05:17:35.175717 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 05:17:35.175756 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 05:17:35.176278 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 05:17:35.176868 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jul 15 05:17:35.177917 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 05:17:35.178336 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 05:17:35.178400 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 05:17:35.178893 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 05:17:35.178961 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 05:17:35.185500 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 05:17:35.185577 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 05:17:35.191868 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 05:17:35.192031 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 05:17:35.192115 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 05:17:35.197010 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 05:17:35.197539 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 05:17:35.217806 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 05:17:35.217846 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:17:35.222342 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 05:17:35.226790 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 05:17:35.226842 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 05:17:35.229504 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 05:17:35.229543 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:17:35.232504 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 05:17:35.232536 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 05:17:35.236214 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 05:17:35.236255 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:17:35.244213 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:17:35.250696 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 05:17:35.250769 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:17:35.257101 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 05:17:35.265817 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4786dd eth0: Data path switched from VF: enP30832s1 Jul 15 05:17:35.265964 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 15 05:17:35.259164 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:17:35.264358 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 05:17:35.264403 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 05:17:35.268934 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 05:17:35.268955 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:17:35.277217 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 05:17:35.277265 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jul 15 05:17:35.279971 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 05:17:35.280009 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 05:17:35.280226 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 05:17:35.280258 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 05:17:35.282839 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 05:17:35.282868 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 05:17:35.282913 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:17:35.285315 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 05:17:35.285358 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:17:35.285746 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 15 05:17:35.285774 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 05:17:35.292646 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 05:17:35.294168 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:17:35.298624 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:17:35.298661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:17:35.320462 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 05:17:35.320509 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 15 05:17:35.320535 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 05:17:35.320566 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:17:35.320830 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 05:17:35.320896 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 05:17:35.322279 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 05:17:35.322337 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 05:17:35.322862 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 05:17:35.325849 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 05:17:35.343412 systemd[1]: Switching root. Jul 15 05:17:35.398284 systemd-journald[205]: Journal stopped Jul 15 05:17:38.597477 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). 
Jul 15 05:17:38.597504 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 05:17:38.597515 kernel: SELinux: policy capability open_perms=1 Jul 15 05:17:38.597522 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 05:17:38.597529 kernel: SELinux: policy capability always_check_network=0 Jul 15 05:17:38.597536 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 05:17:38.597546 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 05:17:38.597554 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 05:17:38.597561 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 05:17:38.597568 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 05:17:38.597576 kernel: audit: type=1403 audit(1752556656.314:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 05:17:38.597585 systemd[1]: Successfully loaded SELinux policy in 126.077ms. Jul 15 05:17:38.597594 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.889ms. Jul 15 05:17:38.597605 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 05:17:38.597614 systemd[1]: Detected virtualization microsoft. Jul 15 05:17:38.597622 systemd[1]: Detected architecture x86-64. Jul 15 05:17:38.597631 systemd[1]: Detected first boot. Jul 15 05:17:38.597639 systemd[1]: Hostname set to . Jul 15 05:17:38.597649 systemd[1]: Initializing machine ID from random generator. Jul 15 05:17:38.597657 zram_generator::config[1197]: No configuration found. Jul 15 05:17:38.597666 kernel: Guest personality initialized and is inactive Jul 15 05:17:38.597674 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Jul 15 05:17:38.597682 kernel: Initialized host personality Jul 15 05:17:38.597689 kernel: NET: Registered PF_VSOCK protocol family Jul 15 05:17:38.597697 systemd[1]: Populated /etc with preset unit settings. Jul 15 05:17:38.597707 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 05:17:38.597716 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 05:17:38.597724 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 05:17:38.597754 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 05:17:38.597763 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 05:17:38.597772 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 05:17:38.597780 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 05:17:38.597790 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 05:17:38.597799 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 05:17:38.597807 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 05:17:38.597816 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 05:17:38.597823 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 05:17:38.597832 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
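First boot also initializes the machine ID from the random generator, as logged above. A sketch of the end result (the real tool is systemd-machine-id-setup; this only shows the 32-hex-digit format written to /etc/machine-id):

import uuid

def init_machine_id(path="/etc/machine-id"):
    # systemd stores the machine ID as 32 lowercase hex characters plus a newline;
    # uuid4() draws from the same kernel RNG used for first-boot seeding.
    machine_id = uuid.uuid4().hex
    with open(path, "w") as f:
        f.write(machine_id + "\n")
    return machine_id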
Jul 15 05:17:38.597841 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:17:38.597849 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 05:17:38.597860 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 05:17:38.597871 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 05:17:38.597880 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 05:17:38.597889 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 15 05:17:38.597897 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:17:38.597905 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:17:38.597914 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 05:17:38.597922 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 05:17:38.597932 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 05:17:38.597941 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 05:17:38.597949 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:17:38.597957 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 05:17:38.597965 systemd[1]: Reached target slices.target - Slice Units. Jul 15 05:17:38.597974 systemd[1]: Reached target swap.target - Swaps. Jul 15 05:17:38.597982 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 05:17:38.597991 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 05:17:38.598001 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 05:17:38.598010 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:17:38.598019 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 05:17:38.598027 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:17:38.598035 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 05:17:38.598045 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 05:17:38.598054 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 05:17:38.598063 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 05:17:38.598071 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:17:38.598080 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 05:17:38.598088 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 05:17:38.598096 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 05:17:38.598105 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 05:17:38.598115 systemd[1]: Reached target machines.target - Containers. Jul 15 05:17:38.598124 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 15 05:17:38.598132 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:17:38.598141 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 05:17:38.598150 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 05:17:38.598158 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:17:38.598167 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 05:17:38.598175 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:17:38.598183 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 05:17:38.598194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:17:38.598204 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 05:17:38.598213 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 05:17:38.598222 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 05:17:38.598230 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 05:17:38.598238 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 05:17:38.598247 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:17:38.598256 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 05:17:38.598267 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 05:17:38.598275 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 05:17:38.598284 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 05:17:38.598292 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 05:17:38.598300 kernel: loop: module loaded Jul 15 05:17:38.598308 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 05:17:38.598316 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 05:17:38.598325 kernel: fuse: init (API version 7.41) Jul 15 05:17:38.598335 systemd[1]: Stopped verity-setup.service. Jul 15 05:17:38.598356 systemd-journald[1283]: Collecting audit messages is disabled. Jul 15 05:17:38.598376 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:17:38.598385 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 05:17:38.598396 systemd-journald[1283]: Journal started Jul 15 05:17:38.598415 systemd-journald[1283]: Runtime Journal (/run/log/journal/9047fd896d6d4bc080879145b3b85089) is 8M, max 158.9M, 150.9M free. Jul 15 05:17:38.205552 systemd[1]: Queued start job for default target multi-user.target. Jul 15 05:17:38.213141 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 15 05:17:38.213486 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 05:17:38.603763 systemd[1]: Started systemd-journald.service - Journal Service. 
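The journal service above starts with an 8M runtime journal under /run/log/journal. A quick way to confirm current journal usage, sketched as a Python shell-out (journalctl --disk-usage covers both the runtime and persistent journals):

import subprocess

def journal_usage():
    out = subprocess.run(["journalctl", "--disk-usage"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

print(journal_usage())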
Jul 15 05:17:38.606286 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 05:17:38.608301 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 05:17:38.609915 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 05:17:38.612887 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 05:17:38.615849 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 05:17:38.618009 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 05:17:38.619971 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:17:38.621582 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 05:17:38.621707 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 05:17:38.624984 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:17:38.625125 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:17:38.627444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:17:38.627608 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:17:38.631006 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 05:17:38.631148 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 05:17:38.635163 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:17:38.635289 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:17:38.637986 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 05:17:38.640963 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:17:38.643968 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 05:17:38.652000 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 05:17:38.656818 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 05:17:38.659749 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 05:17:38.661884 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 05:17:38.661968 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 05:17:38.666053 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 05:17:38.671851 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 05:17:38.673536 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:17:38.675862 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 05:17:38.685696 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 05:17:38.687873 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 05:17:38.690847 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 05:17:38.692860 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 15 05:17:38.693840 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:17:38.697910 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 05:17:38.705926 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 05:17:38.710783 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 05:17:38.715288 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 05:17:38.717421 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:17:38.720936 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 05:17:38.727815 systemd-journald[1283]: Time spent on flushing to /var/log/journal/9047fd896d6d4bc080879145b3b85089 is 60.890ms for 985 entries. Jul 15 05:17:38.727815 systemd-journald[1283]: System Journal (/var/log/journal/9047fd896d6d4bc080879145b3b85089) is 11.9M, max 2.6G, 2.6G free. Jul 15 05:17:38.874991 systemd-journald[1283]: Received client request to flush runtime journal. Jul 15 05:17:38.875026 kernel: loop0: detected capacity change from 0 to 28624 Jul 15 05:17:38.875041 systemd-journald[1283]: /var/log/journal/9047fd896d6d4bc080879145b3b85089/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jul 15 05:17:38.875056 kernel: ACPI: bus type drm_connector registered Jul 15 05:17:38.875066 systemd-journald[1283]: Rotating system journal. Jul 15 05:17:38.741098 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 05:17:38.742451 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 05:17:38.746480 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 05:17:38.765172 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:17:38.765372 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 05:17:38.800755 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Jul 15 05:17:38.800762 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Jul 15 05:17:38.803965 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 05:17:38.809672 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 05:17:38.827980 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:17:38.875775 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 05:17:38.879604 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 05:17:38.883990 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 05:17:38.893871 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 05:17:38.900985 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Jul 15 05:17:38.900999 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Jul 15 05:17:38.903641 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:17:39.105752 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 05:17:39.173776 kernel: loop1: detected capacity change from 0 to 114000 Jul 15 05:17:39.214519 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jul 15 05:17:39.518753 kernel: loop2: detected capacity change from 0 to 146488 Jul 15 05:17:39.657071 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 05:17:39.660728 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:17:39.688342 systemd-udevd[1366]: Using default interface naming scheme 'v255'. Jul 15 05:17:39.754093 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:17:39.758857 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 05:17:39.814839 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 05:17:39.840708 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 15 05:17:39.861753 kernel: loop3: detected capacity change from 0 to 224512 Jul 15 05:17:39.901850 kernel: loop4: detected capacity change from 0 to 28624 Jul 15 05:17:39.916611 kernel: loop5: detected capacity change from 0 to 114000 Jul 15 05:17:39.914175 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 05:17:39.918758 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 05:17:39.924046 kernel: hv_vmbus: registering driver hyperv_fb Jul 15 05:17:39.925846 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 15 05:17:39.928869 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 15 05:17:39.930785 kernel: Console: switching to colour dummy device 80x25 Jul 15 05:17:39.937041 kernel: Console: switching to colour frame buffer device 128x48 Jul 15 05:17:39.937779 kernel: loop6: detected capacity change from 0 to 146488 Jul 15 05:17:39.952788 kernel: hv_vmbus: registering driver hv_balloon Jul 15 05:17:39.955758 kernel: loop7: detected capacity change from 0 to 224512 Jul 15 05:17:39.967756 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 15 05:17:39.982791 (sd-merge)[1412]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 15 05:17:39.986878 (sd-merge)[1412]: Merged extensions into '/usr'. Jul 15 05:17:39.995147 systemd[1]: Reload requested from client PID 1336 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 05:17:39.995160 systemd[1]: Reloading... Jul 15 05:17:40.058135 systemd-networkd[1374]: lo: Link UP Jul 15 05:17:40.058146 systemd-networkd[1374]: lo: Gained carrier Jul 15 05:17:40.062464 systemd-networkd[1374]: Enumeration completed Jul 15 05:17:40.062719 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:17:40.062729 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 05:17:40.066793 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jul 15 05:17:40.071781 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 15 05:17:40.075782 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4786dd eth0: Data path switched to VF: enP30832s1 Jul 15 05:17:40.078215 systemd-networkd[1374]: enP30832s1: Link UP Jul 15 05:17:40.078280 systemd-networkd[1374]: eth0: Link UP Jul 15 05:17:40.078282 systemd-networkd[1374]: eth0: Gained carrier Jul 15 05:17:40.078297 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
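The (sd-merge) lines further down show systemd-sysext stacking the containerd-flatcar, docker-flatcar, kubernetes, and oem-azure extensions over /usr. A conceptual Python sketch of the underlying overlay mount (paths are hypothetical, and the real tool also validates extension-release metadata before merging):

import subprocess

def merge_extensions(extension_usr_dirs, target="/usr"):
    # Stack each extension's usr/ tree over the base /usr with a read-only
    # overlay. In overlayfs the first lowerdir entry is the top-most layer,
    # so extensions come before the base tree.
    lower = ":".join(extension_usr_dirs + [target])
    subprocess.run(
        ["mount", "-t", "overlay", "overlay",
         "-o", f"ro,lowerdir={lower}", target],
        check=True)

# e.g. merge_extensions(["/run/extensions/kubernetes/usr",
#                        "/run/extensions/oem-azure/usr"])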
Jul 15 05:17:40.085877 zram_generator::config[1464]: No configuration found. Jul 15 05:17:40.084942 systemd-networkd[1374]: enP30832s1: Gained carrier Jul 15 05:17:40.091286 systemd-networkd[1374]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 15 05:17:40.169702 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#218 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 15 05:17:40.275391 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:17:40.285748 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jul 15 05:17:40.365941 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jul 15 05:17:40.369193 systemd[1]: Reloading finished in 373 ms. Jul 15 05:17:40.388083 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 05:17:40.390977 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 05:17:40.422431 systemd[1]: Starting ensure-sysext.service... Jul 15 05:17:40.425643 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 05:17:40.428590 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 05:17:40.431903 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 05:17:40.437932 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 05:17:40.447831 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:17:40.460615 systemd-tmpfiles[1542]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 05:17:40.460637 systemd-tmpfiles[1542]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 05:17:40.460954 systemd-tmpfiles[1542]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 05:17:40.461143 systemd-tmpfiles[1542]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 05:17:40.461712 systemd-tmpfiles[1542]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 05:17:40.461928 systemd-tmpfiles[1542]: ACLs are not supported, ignoring. Jul 15 05:17:40.461967 systemd-tmpfiles[1542]: ACLs are not supported, ignoring. Jul 15 05:17:40.465710 systemd-tmpfiles[1542]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:17:40.465717 systemd-tmpfiles[1542]: Skipping /boot Jul 15 05:17:40.466862 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 05:17:40.467917 systemd[1]: Reload requested from client PID 1538 ('systemctl') (unit ensure-sysext.service)... Jul 15 05:17:40.467976 systemd[1]: Reloading... Jul 15 05:17:40.476017 systemd-tmpfiles[1542]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:17:40.476082 systemd-tmpfiles[1542]: Skipping /boot Jul 15 05:17:40.527758 zram_generator::config[1583]: No configuration found. 
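Once eth0 gains carrier and picks up 10.200.8.39/24 from DHCP as logged above, the link state can be inspected through systemd-networkd's CLI. A small sketch (assumes networkctl is available and the interface is named eth0):

import subprocess

def show_link(ifname="eth0"):
    # "networkctl status" prints carrier state plus the DHCPv4 address and
    # gateway seen in the log (10.200.8.39/24 via 10.200.8.1).
    out = subprocess.run(["networkctl", "status", ifname],
                         capture_output=True, text=True, check=True)
    return out.stdout

print(show_link())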
Jul 15 05:17:40.597206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:17:40.676591 systemd[1]: Reloading finished in 208 ms. Jul 15 05:17:40.690924 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 05:17:40.691331 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:17:40.696848 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:17:40.702044 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 05:17:40.706518 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 05:17:40.708232 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 05:17:40.711943 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 05:17:40.720275 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:17:40.720414 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:17:40.723941 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:17:40.727974 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:17:40.733842 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:17:40.736087 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:17:40.736210 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:17:40.736307 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:17:40.738479 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:17:40.741297 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:17:40.744928 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:17:40.749250 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:17:40.749478 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:17:40.753601 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:17:40.753899 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:17:40.763435 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 05:17:40.767450 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:17:40.767624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:17:40.769858 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:17:40.774551 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 15 05:17:40.781424 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:17:40.783899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:17:40.784010 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:17:40.784091 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:17:40.788715 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:17:40.789217 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:17:40.793203 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 05:17:40.793347 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:17:40.793429 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:17:40.793552 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 05:17:40.793729 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:17:40.799083 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:17:40.799215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:17:40.805011 systemd[1]: Finished ensure-sysext.service. Jul 15 05:17:40.807748 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:17:40.808122 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:17:40.811139 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:17:40.812906 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:17:40.815310 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 05:17:40.825998 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:17:40.826122 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 05:17:40.829059 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 05:17:40.829123 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 05:17:40.830148 systemd-resolved[1645]: Positive Trust Anchors: Jul 15 05:17:40.830156 systemd-resolved[1645]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:17:40.830184 systemd-resolved[1645]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:17:40.846605 systemd-resolved[1645]: Using system hostname 'ci-4396.0.0-n-11ebebb5c9'. Jul 15 05:17:40.847795 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 05:17:40.850892 systemd[1]: Reached target network.target - Network. Jul 15 05:17:40.853320 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:17:40.854596 augenrules[1687]: No rules Jul 15 05:17:40.856931 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:17:40.857087 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:17:41.133471 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 05:17:41.135327 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 05:17:41.248808 systemd-networkd[1374]: eth0: Gained IPv6LL Jul 15 05:17:41.250308 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 05:17:41.252156 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 05:17:41.376882 systemd-networkd[1374]: enP30832s1: Gained IPv6LL Jul 15 05:17:43.133526 ldconfig[1331]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 05:17:43.151968 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 05:17:43.154587 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 05:17:43.176927 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 05:17:43.178379 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:17:43.179671 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 05:17:43.181283 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 05:17:43.183886 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 15 05:17:43.185429 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 05:17:43.187837 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 05:17:43.190783 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 05:17:43.193785 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 05:17:43.193816 systemd[1]: Reached target paths.target - Path Units. Jul 15 05:17:43.195806 systemd[1]: Reached target timers.target - Timer Units. 
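With systemd-resolved up, the trust anchors and search logic listed above apply to queries sent to its stub listener. A sketch that shells out to resolvectl to exercise that path (example.com is only a placeholder name):

import subprocess

def resolve(name="example.com"):
    # resolvectl talks to systemd-resolved (stub listener on 127.0.0.53),
    # so answers reflect the configured trust anchors and search domains.
    out = subprocess.run(["resolvectl", "query", name],
                         capture_output=True, text=True, check=True)
    return out.stdout

print(resolve())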
Jul 15 05:17:43.197921 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 05:17:43.201604 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 05:17:43.206188 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 05:17:43.208894 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 05:17:43.210484 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 05:17:43.212922 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 05:17:43.216006 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 05:17:43.219232 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 05:17:43.222385 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:17:43.223547 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:17:43.224697 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:17:43.224723 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:17:43.226379 systemd[1]: Starting chronyd.service - NTP client/server... Jul 15 05:17:43.229841 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 05:17:43.237368 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 15 05:17:43.242867 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 05:17:43.245858 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 05:17:43.252279 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 05:17:43.256192 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 05:17:43.258480 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 05:17:43.260640 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 15 05:17:43.262631 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jul 15 05:17:43.265126 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 15 05:17:43.267195 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 15 05:17:43.274852 jq[1705]: false Jul 15 05:17:43.274909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:17:43.281234 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 05:17:43.282336 KVP[1711]: KVP starting; pid is:1711 Jul 15 05:17:43.288629 KVP[1711]: KVP LIC Version: 3.1 Jul 15 05:17:43.288753 kernel: hv_utils: KVP IC version 4.0 Jul 15 05:17:43.289539 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 05:17:43.294352 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 05:17:43.299141 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 05:17:43.304467 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 15 05:17:43.309278 google_oslogin_nss_cache[1710]: oslogin_cache_refresh[1710]: Refreshing passwd entry cache Jul 15 05:17:43.309445 extend-filesystems[1708]: Found /dev/nvme0n1p6 Jul 15 05:17:43.312229 oslogin_cache_refresh[1710]: Refreshing passwd entry cache Jul 15 05:17:43.314692 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 05:17:43.317515 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 05:17:43.319143 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 05:17:43.320823 (chronyd)[1700]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 15 05:17:43.321306 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 05:17:43.324708 google_oslogin_nss_cache[1710]: oslogin_cache_refresh[1710]: Failure getting users, quitting Jul 15 05:17:43.324708 google_oslogin_nss_cache[1710]: oslogin_cache_refresh[1710]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:17:43.324700 oslogin_cache_refresh[1710]: Failure getting users, quitting Jul 15 05:17:43.324884 google_oslogin_nss_cache[1710]: oslogin_cache_refresh[1710]: Refreshing group entry cache Jul 15 05:17:43.324717 oslogin_cache_refresh[1710]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:17:43.324776 oslogin_cache_refresh[1710]: Refreshing group entry cache Jul 15 05:17:43.327326 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 05:17:43.331814 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 05:17:43.332372 extend-filesystems[1708]: Found /dev/nvme0n1p9 Jul 15 05:17:43.334437 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 05:17:43.336452 extend-filesystems[1708]: Checking size of /dev/nvme0n1p9 Jul 15 05:17:43.339529 google_oslogin_nss_cache[1710]: oslogin_cache_refresh[1710]: Failure getting groups, quitting Jul 15 05:17:43.339529 google_oslogin_nss_cache[1710]: oslogin_cache_refresh[1710]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:17:43.338909 oslogin_cache_refresh[1710]: Failure getting groups, quitting Jul 15 05:17:43.338918 oslogin_cache_refresh[1710]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:17:43.340387 chronyd[1734]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 15 05:17:43.340917 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 05:17:43.341278 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 15 05:17:43.341419 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 15 05:17:43.345453 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 05:17:43.346504 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 05:17:43.358325 jq[1727]: true Jul 15 05:17:43.364873 systemd[1]: Started chronyd.service - NTP client/server. 
Jul 15 05:17:43.362869 chronyd[1734]: Timezone right/UTC failed leap second check, ignoring Jul 15 05:17:43.362996 chronyd[1734]: Loaded seccomp filter (level 2) Jul 15 05:17:43.368088 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 05:17:43.368253 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 05:17:43.382265 extend-filesystems[1708]: Old size kept for /dev/nvme0n1p9 Jul 15 05:17:43.383613 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 05:17:43.384507 (ntainerd)[1750]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 05:17:43.386345 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 05:17:43.401474 jq[1747]: true Jul 15 05:17:43.409011 update_engine[1725]: I20250715 05:17:43.407351 1725 main.cc:92] Flatcar Update Engine starting Jul 15 05:17:43.409169 tar[1737]: linux-amd64/LICENSE Jul 15 05:17:43.409169 tar[1737]: linux-amd64/helm Jul 15 05:17:43.435320 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 05:17:43.478475 systemd-logind[1724]: New seat seat0. Jul 15 05:17:43.481490 systemd-logind[1724]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 05:17:43.481610 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 05:17:43.492762 dbus-daemon[1703]: [system] SELinux support is enabled Jul 15 05:17:43.492872 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 05:17:43.498142 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 05:17:43.498172 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 05:17:43.502697 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 05:17:43.507844 update_engine[1725]: I20250715 05:17:43.502712 1725 update_check_scheduler.cc:74] Next update check in 5m45s Jul 15 05:17:43.502714 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 05:17:43.506069 systemd[1]: Started update-engine.service - Update Engine. Jul 15 05:17:43.508671 dbus-daemon[1703]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 15 05:17:43.510304 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 05:17:43.524755 bash[1786]: Updated "/home/core/.ssh/authorized_keys" Jul 15 05:17:43.527160 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 05:17:43.532288 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jul 15 05:17:43.590961 coreos-metadata[1702]: Jul 15 05:17:43.590 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 15 05:17:43.596770 coreos-metadata[1702]: Jul 15 05:17:43.595 INFO Fetch successful Jul 15 05:17:43.596770 coreos-metadata[1702]: Jul 15 05:17:43.595 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 15 05:17:43.599316 coreos-metadata[1702]: Jul 15 05:17:43.599 INFO Fetch successful Jul 15 05:17:43.599316 coreos-metadata[1702]: Jul 15 05:17:43.599 INFO Fetching http://168.63.129.16/machine/a7fff4a7-2f0e-44e4-a51e-43049b1a188c/656e6cbc%2D33cf%2D4661%2Da902%2D84f46c1fe06e.%5Fci%2D4396.0.0%2Dn%2D11ebebb5c9?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 15 05:17:43.602587 coreos-metadata[1702]: Jul 15 05:17:43.601 INFO Fetch successful Jul 15 05:17:43.604752 coreos-metadata[1702]: Jul 15 05:17:43.603 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 15 05:17:43.612395 coreos-metadata[1702]: Jul 15 05:17:43.612 INFO Fetch successful Jul 15 05:17:43.663827 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 15 05:17:43.667479 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 05:17:43.697960 sshd_keygen[1764]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 05:17:43.726286 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 05:17:43.733681 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 05:17:43.737755 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 15 05:17:43.765903 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 05:17:43.766062 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 05:17:43.774832 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 05:17:43.787208 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 15 05:17:43.809192 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 05:17:43.814005 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 05:17:43.816333 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 05:17:43.818702 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 05:17:43.947384 locksmithd[1799]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 05:17:43.977597 tar[1737]: linux-amd64/README.md Jul 15 05:17:43.993648 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 15 05:17:44.237439 containerd[1750]: time="2025-07-15T05:17:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 05:17:44.237439 containerd[1750]: time="2025-07-15T05:17:44.236665123Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 05:17:44.245450 containerd[1750]: time="2025-07-15T05:17:44.245417422Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.507µs" Jul 15 05:17:44.245450 containerd[1750]: time="2025-07-15T05:17:44.245444315Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 05:17:44.245529 containerd[1750]: time="2025-07-15T05:17:44.245461020Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 05:17:44.245588 containerd[1750]: time="2025-07-15T05:17:44.245574440Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 05:17:44.245609 containerd[1750]: time="2025-07-15T05:17:44.245587519Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 05:17:44.245624 containerd[1750]: time="2025-07-15T05:17:44.245606339Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:17:44.245661 containerd[1750]: time="2025-07-15T05:17:44.245648650Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:17:44.245661 containerd[1750]: time="2025-07-15T05:17:44.245657619Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:17:44.245864 containerd[1750]: time="2025-07-15T05:17:44.245846647Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:17:44.245864 containerd[1750]: time="2025-07-15T05:17:44.245859651Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:17:44.245903 containerd[1750]: time="2025-07-15T05:17:44.245874779Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:17:44.245903 containerd[1750]: time="2025-07-15T05:17:44.245882682Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 05:17:44.245960 containerd[1750]: time="2025-07-15T05:17:44.245947216Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 05:17:44.246470 containerd[1750]: time="2025-07-15T05:17:44.246073734Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:17:44.246470 containerd[1750]: time="2025-07-15T05:17:44.246094856Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jul 15 05:17:44.246470 containerd[1750]: time="2025-07-15T05:17:44.246103294Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 05:17:44.246470 containerd[1750]: time="2025-07-15T05:17:44.246126927Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 05:17:44.246470 containerd[1750]: time="2025-07-15T05:17:44.246318369Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 05:17:44.246470 containerd[1750]: time="2025-07-15T05:17:44.246352709Z" level=info msg="metadata content store policy set" policy=shared Jul 15 05:17:44.262273 containerd[1750]: time="2025-07-15T05:17:44.261880038Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 05:17:44.262273 containerd[1750]: time="2025-07-15T05:17:44.261926625Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 05:17:44.262273 containerd[1750]: time="2025-07-15T05:17:44.261941184Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 05:17:44.262273 containerd[1750]: time="2025-07-15T05:17:44.261952425Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 05:17:44.262273 containerd[1750]: time="2025-07-15T05:17:44.261965263Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 05:17:44.262273 containerd[1750]: time="2025-07-15T05:17:44.261974912Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 05:17:44.262273 containerd[1750]: time="2025-07-15T05:17:44.261992767Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 05:17:44.262273 containerd[1750]: time="2025-07-15T05:17:44.262003684Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 05:17:44.262273 containerd[1750]: time="2025-07-15T05:17:44.262013293Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 05:17:44.262273 containerd[1750]: time="2025-07-15T05:17:44.262022155Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 05:17:44.262273 containerd[1750]: time="2025-07-15T05:17:44.262030052Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 05:17:44.262273 containerd[1750]: time="2025-07-15T05:17:44.262040846Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 05:17:44.262273 containerd[1750]: time="2025-07-15T05:17:44.262132439Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 05:17:44.262273 containerd[1750]: time="2025-07-15T05:17:44.262146114Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 05:17:44.262535 containerd[1750]: time="2025-07-15T05:17:44.262168941Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 05:17:44.262535 containerd[1750]: time="2025-07-15T05:17:44.262179460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Jul 15 05:17:44.262535 containerd[1750]: time="2025-07-15T05:17:44.262188509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 05:17:44.262535 containerd[1750]: time="2025-07-15T05:17:44.262197263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 05:17:44.262535 containerd[1750]: time="2025-07-15T05:17:44.262210855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 05:17:44.262535 containerd[1750]: time="2025-07-15T05:17:44.262219582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 05:17:44.262535 containerd[1750]: time="2025-07-15T05:17:44.262228621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 05:17:44.262535 containerd[1750]: time="2025-07-15T05:17:44.262237645Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 05:17:44.262535 containerd[1750]: time="2025-07-15T05:17:44.262246475Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 05:17:44.262535 containerd[1750]: time="2025-07-15T05:17:44.262304775Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 05:17:44.262535 containerd[1750]: time="2025-07-15T05:17:44.262316278Z" level=info msg="Start snapshots syncer" Jul 15 05:17:44.262535 containerd[1750]: time="2025-07-15T05:17:44.262337238Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 05:17:44.262725 containerd[1750]: time="2025-07-15T05:17:44.262538118Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 05:17:44.262725 containerd[1750]: time="2025-07-15T05:17:44.262577527Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 05:17:44.262860 containerd[1750]: time="2025-07-15T05:17:44.262624032Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 05:17:44.262860 containerd[1750]: time="2025-07-15T05:17:44.262691600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 05:17:44.262860 containerd[1750]: time="2025-07-15T05:17:44.262712467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 05:17:44.262860 containerd[1750]: time="2025-07-15T05:17:44.262723442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 05:17:44.262860 containerd[1750]: time="2025-07-15T05:17:44.262766549Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 05:17:44.262860 containerd[1750]: time="2025-07-15T05:17:44.262783884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 05:17:44.262860 containerd[1750]: time="2025-07-15T05:17:44.262793153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 05:17:44.262860 containerd[1750]: time="2025-07-15T05:17:44.262802141Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 05:17:44.262860 containerd[1750]: time="2025-07-15T05:17:44.262824252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 05:17:44.262860 containerd[1750]: 
time="2025-07-15T05:17:44.262832050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 05:17:44.262860 containerd[1750]: time="2025-07-15T05:17:44.262842296Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 05:17:44.263029 containerd[1750]: time="2025-07-15T05:17:44.262862492Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:17:44.263029 containerd[1750]: time="2025-07-15T05:17:44.262878645Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:17:44.263029 containerd[1750]: time="2025-07-15T05:17:44.262887235Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:17:44.263029 containerd[1750]: time="2025-07-15T05:17:44.262896476Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:17:44.263029 containerd[1750]: time="2025-07-15T05:17:44.262904145Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 05:17:44.263029 containerd[1750]: time="2025-07-15T05:17:44.262915438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 05:17:44.263029 containerd[1750]: time="2025-07-15T05:17:44.262924556Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 05:17:44.263029 containerd[1750]: time="2025-07-15T05:17:44.262938545Z" level=info msg="runtime interface created" Jul 15 05:17:44.263029 containerd[1750]: time="2025-07-15T05:17:44.262942717Z" level=info msg="created NRI interface" Jul 15 05:17:44.263029 containerd[1750]: time="2025-07-15T05:17:44.262950075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 05:17:44.263029 containerd[1750]: time="2025-07-15T05:17:44.262959421Z" level=info msg="Connect containerd service" Jul 15 05:17:44.263029 containerd[1750]: time="2025-07-15T05:17:44.262978873Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 05:17:44.263750 containerd[1750]: time="2025-07-15T05:17:44.263565680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:17:44.482154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 15 05:17:44.489992 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:17:44.966561 kubelet[1866]: E0715 05:17:44.966480 1866 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:17:44.968457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:17:44.968569 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:17:44.968979 systemd[1]: kubelet.service: Consumed 815ms CPU time, 265.1M memory peak. Jul 15 05:17:45.065838 waagent[1837]: 2025-07-15T05:17:45.065769Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 15 05:17:45.068839 waagent[1837]: 2025-07-15T05:17:45.068800Z INFO Daemon Daemon OS: flatcar 4396.0.0 Jul 15 05:17:45.070819 waagent[1837]: 2025-07-15T05:17:45.070789Z INFO Daemon Daemon Python: 3.11.13 Jul 15 05:17:45.072942 waagent[1837]: 2025-07-15T05:17:45.072899Z INFO Daemon Daemon Run daemon Jul 15 05:17:45.074947 waagent[1837]: 2025-07-15T05:17:45.074906Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4396.0.0' Jul 15 05:17:45.077872 waagent[1837]: 2025-07-15T05:17:45.077816Z INFO Daemon Daemon Using waagent for provisioning Jul 15 05:17:45.080928 waagent[1837]: 2025-07-15T05:17:45.080890Z INFO Daemon Daemon Activate resource disk Jul 15 05:17:45.083870 waagent[1837]: 2025-07-15T05:17:45.083822Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 15 05:17:45.086846 waagent[1837]: 2025-07-15T05:17:45.086808Z INFO Daemon Daemon Found device: None Jul 15 05:17:45.088011 waagent[1837]: 2025-07-15T05:17:45.087983Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 15 05:17:45.091825 waagent[1837]: 2025-07-15T05:17:45.091790Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 15 05:17:45.097216 waagent[1837]: 2025-07-15T05:17:45.097168Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 15 05:17:45.098568 waagent[1837]: 2025-07-15T05:17:45.098532Z INFO Daemon Daemon Running default provisioning handler Jul 15 05:17:45.104933 waagent[1837]: 2025-07-15T05:17:45.104890Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 15 05:17:45.109125 waagent[1837]: 2025-07-15T05:17:45.109085Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 15 05:17:45.111402 waagent[1837]: 2025-07-15T05:17:45.111372Z INFO Daemon Daemon cloud-init is enabled: False Jul 15 05:17:45.114120 waagent[1837]: 2025-07-15T05:17:45.113943Z INFO Daemon Daemon Copying ovf-env.xml Jul 15 05:17:45.124960 containerd[1750]: time="2025-07-15T05:17:45.124883656Z" level=info msg="Start subscribing containerd event" Jul 15 05:17:45.124960 containerd[1750]: time="2025-07-15T05:17:45.124930804Z" level=info msg="Start recovering state" Jul 15 05:17:45.125298 containerd[1750]: time="2025-07-15T05:17:45.125027439Z" level=info msg="Start event monitor" Jul 15 05:17:45.125298 containerd[1750]: time="2025-07-15T05:17:45.125040335Z" level=info msg="Start cni network conf syncer for default" Jul 15 05:17:45.125298 containerd[1750]: time="2025-07-15T05:17:45.125049012Z" level=info msg="Start streaming server" Jul 15 05:17:45.125298 containerd[1750]: time="2025-07-15T05:17:45.125066188Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 05:17:45.125298 containerd[1750]: time="2025-07-15T05:17:45.125073377Z" level=info msg="runtime interface starting up..." Jul 15 05:17:45.125298 containerd[1750]: time="2025-07-15T05:17:45.125080462Z" level=info msg="starting plugins..." Jul 15 05:17:45.125298 containerd[1750]: time="2025-07-15T05:17:45.125099412Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 05:17:45.125298 containerd[1750]: time="2025-07-15T05:17:45.125112442Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 05:17:45.125298 containerd[1750]: time="2025-07-15T05:17:45.125144646Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 05:17:45.125298 containerd[1750]: time="2025-07-15T05:17:45.125181844Z" level=info msg="containerd successfully booted in 0.889254s" Jul 15 05:17:45.125357 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 05:17:45.128085 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 05:17:45.131037 systemd[1]: Startup finished in 2.852s (kernel) + 12.897s (initrd) + 8.942s (userspace) = 24.692s. Jul 15 05:17:45.210522 waagent[1837]: 2025-07-15T05:17:45.210479Z INFO Daemon Daemon Successfully mounted dvd Jul 15 05:17:45.248762 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 15 05:17:45.249471 waagent[1837]: 2025-07-15T05:17:45.249432Z INFO Daemon Daemon Detect protocol endpoint Jul 15 05:17:45.249660 waagent[1837]: 2025-07-15T05:17:45.249635Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 15 05:17:45.249918 waagent[1837]: 2025-07-15T05:17:45.249898Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 15 05:17:45.250198 waagent[1837]: 2025-07-15T05:17:45.250183Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 15 05:17:45.250605 waagent[1837]: 2025-07-15T05:17:45.250587Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 15 05:17:45.250902 waagent[1837]: 2025-07-15T05:17:45.250886Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 15 05:17:45.264835 waagent[1837]: 2025-07-15T05:17:45.264807Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 15 05:17:45.265966 waagent[1837]: 2025-07-15T05:17:45.265054Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 15 05:17:45.265966 waagent[1837]: 2025-07-15T05:17:45.265175Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 15 05:17:45.312340 login[1839]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 15 05:17:45.313805 login[1840]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 15 05:17:45.334391 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 05:17:45.338146 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 05:17:45.342328 systemd-logind[1724]: New session 1 of user core. Jul 15 05:17:45.345689 systemd-logind[1724]: New session 2 of user core. Jul 15 05:17:45.357056 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 05:17:45.359844 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 05:17:45.370303 waagent[1837]: 2025-07-15T05:17:45.370256Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 15 05:17:45.372569 waagent[1837]: 2025-07-15T05:17:45.370412Z INFO Daemon Daemon Forcing an update of the goal state. Jul 15 05:17:45.374082 waagent[1837]: 2025-07-15T05:17:45.374058Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 15 05:17:45.384926 (systemd)[1895]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 05:17:45.386260 systemd-logind[1724]: New session c1 of user core. Jul 15 05:17:45.391754 waagent[1837]: 2025-07-15T05:17:45.389369Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 15 05:17:45.392138 waagent[1837]: 2025-07-15T05:17:45.392106Z INFO Daemon Jul 15 05:17:45.393389 waagent[1837]: 2025-07-15T05:17:45.393344Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b4b71fcb-1914-4a46-9f33-56d99faaad71 eTag: 11364139886923066689 source: Fabric] Jul 15 05:17:45.395786 waagent[1837]: 2025-07-15T05:17:45.395755Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 15 05:17:45.397622 waagent[1837]: 2025-07-15T05:17:45.397452Z INFO Daemon Jul 15 05:17:45.398462 waagent[1837]: 2025-07-15T05:17:45.398383Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 15 05:17:45.407934 waagent[1837]: 2025-07-15T05:17:45.407310Z INFO Daemon Daemon Downloading artifacts profile blob Jul 15 05:17:45.492269 waagent[1837]: 2025-07-15T05:17:45.492225Z INFO Daemon Downloaded certificate {'thumbprint': '143EF3980F616EA0B645B5EE30F4A8DF0D43C0B8', 'hasPrivateKey': True} Jul 15 05:17:45.495758 waagent[1837]: 2025-07-15T05:17:45.495426Z INFO Daemon Fetch goal state completed Jul 15 05:17:45.499478 systemd[1895]: Queued start job for default target default.target. 
Jul 15 05:17:45.501878 waagent[1837]: 2025-07-15T05:17:45.501856Z INFO Daemon Daemon Starting provisioning Jul 15 05:17:45.502147 waagent[1837]: 2025-07-15T05:17:45.501992Z INFO Daemon Daemon Handle ovf-env.xml. Jul 15 05:17:45.502725 waagent[1837]: 2025-07-15T05:17:45.502151Z INFO Daemon Daemon Set hostname [ci-4396.0.0-n-11ebebb5c9] Jul 15 05:17:45.504466 systemd[1895]: Created slice app.slice - User Application Slice. Jul 15 05:17:45.504486 systemd[1895]: Reached target paths.target - Paths. Jul 15 05:17:45.504653 systemd[1895]: Reached target timers.target - Timers. Jul 15 05:17:45.505995 systemd[1895]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 05:17:45.512537 systemd[1895]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 05:17:45.512589 systemd[1895]: Reached target sockets.target - Sockets. Jul 15 05:17:45.512618 systemd[1895]: Reached target basic.target - Basic System. Jul 15 05:17:45.512670 systemd[1895]: Reached target default.target - Main User Target. Jul 15 05:17:45.512689 systemd[1895]: Startup finished in 122ms. Jul 15 05:17:45.512841 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 05:17:45.521845 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 05:17:45.522757 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 05:17:45.527214 waagent[1837]: 2025-07-15T05:17:45.525789Z INFO Daemon Daemon Publish hostname [ci-4396.0.0-n-11ebebb5c9] Jul 15 05:17:45.527214 waagent[1837]: 2025-07-15T05:17:45.527416Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 15 05:17:45.527214 waagent[1837]: 2025-07-15T05:17:45.527639Z INFO Daemon Daemon Primary interface is [eth0] Jul 15 05:17:45.535152 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:17:45.535159 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 05:17:45.536136 waagent[1837]: 2025-07-15T05:17:45.535712Z INFO Daemon Daemon Create user account if not exists Jul 15 05:17:45.536136 waagent[1837]: 2025-07-15T05:17:45.535908Z INFO Daemon Daemon User core already exists, skip useradd Jul 15 05:17:45.536136 waagent[1837]: 2025-07-15T05:17:45.536047Z INFO Daemon Daemon Configure sudoer Jul 15 05:17:45.535178 systemd-networkd[1374]: eth0: DHCP lease lost Jul 15 05:17:45.544336 waagent[1837]: 2025-07-15T05:17:45.544290Z INFO Daemon Daemon Configure sshd Jul 15 05:17:45.550422 waagent[1837]: 2025-07-15T05:17:45.549058Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 15 05:17:45.550422 waagent[1837]: 2025-07-15T05:17:45.549176Z INFO Daemon Daemon Deploy ssh public key. Jul 15 05:17:45.554798 systemd-networkd[1374]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 15 05:17:46.637750 waagent[1837]: 2025-07-15T05:17:46.637667Z INFO Daemon Daemon Provisioning complete Jul 15 05:17:46.648753 waagent[1837]: 2025-07-15T05:17:46.648716Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 15 05:17:46.650073 waagent[1837]: 2025-07-15T05:17:46.650042Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 15 05:17:46.652150 waagent[1837]: 2025-07-15T05:17:46.652120Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 15 05:17:46.741402 waagent[1939]: 2025-07-15T05:17:46.741344Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 15 05:17:46.741618 waagent[1939]: 2025-07-15T05:17:46.741426Z INFO ExtHandler ExtHandler OS: flatcar 4396.0.0 Jul 15 05:17:46.741618 waagent[1939]: 2025-07-15T05:17:46.741461Z INFO ExtHandler ExtHandler Python: 3.11.13 Jul 15 05:17:46.741618 waagent[1939]: 2025-07-15T05:17:46.741495Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jul 15 05:17:46.763373 waagent[1939]: 2025-07-15T05:17:46.763331Z INFO ExtHandler ExtHandler Distro: flatcar-4396.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 15 05:17:46.763479 waagent[1939]: 2025-07-15T05:17:46.763458Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 05:17:46.763518 waagent[1939]: 2025-07-15T05:17:46.763503Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 05:17:46.774194 waagent[1939]: 2025-07-15T05:17:46.774148Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 15 05:17:46.777589 waagent[1939]: 2025-07-15T05:17:46.777559Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 15 05:17:46.777900 waagent[1939]: 2025-07-15T05:17:46.777874Z INFO ExtHandler Jul 15 05:17:46.777941 waagent[1939]: 2025-07-15T05:17:46.777922Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1eab4a42-4e9f-4504-879e-917ed2daad54 eTag: 11364139886923066689 source: Fabric] Jul 15 05:17:46.778107 waagent[1939]: 2025-07-15T05:17:46.778084Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 15 05:17:46.778394 waagent[1939]: 2025-07-15T05:17:46.778371Z INFO ExtHandler Jul 15 05:17:46.778440 waagent[1939]: 2025-07-15T05:17:46.778409Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 15 05:17:46.781825 waagent[1939]: 2025-07-15T05:17:46.781798Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 15 05:17:46.840647 waagent[1939]: 2025-07-15T05:17:46.840603Z INFO ExtHandler Downloaded certificate {'thumbprint': '143EF3980F616EA0B645B5EE30F4A8DF0D43C0B8', 'hasPrivateKey': True} Jul 15 05:17:46.840956 waagent[1939]: 2025-07-15T05:17:46.840931Z INFO ExtHandler Fetch goal state completed Jul 15 05:17:46.855325 waagent[1939]: 2025-07-15T05:17:46.855285Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025) Jul 15 05:17:46.858697 waagent[1939]: 2025-07-15T05:17:46.858652Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1939 Jul 15 05:17:46.858814 waagent[1939]: 2025-07-15T05:17:46.858777Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 15 05:17:46.859025 waagent[1939]: 2025-07-15T05:17:46.859003Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 15 05:17:46.859892 waagent[1939]: 2025-07-15T05:17:46.859867Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4396.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jul 15 05:17:46.860131 waagent[1939]: 2025-07-15T05:17:46.860110Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4396.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 15 05:17:46.860220 waagent[1939]: 2025-07-15T05:17:46.860203Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 15 05:17:46.860554 waagent[1939]: 2025-07-15T05:17:46.860534Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 15 05:17:46.878217 waagent[1939]: 2025-07-15T05:17:46.878194Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 15 05:17:46.878335 waagent[1939]: 2025-07-15T05:17:46.878317Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 15 05:17:46.883069 waagent[1939]: 2025-07-15T05:17:46.882922Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 15 05:17:46.887372 systemd[1]: Reload requested from client PID 1954 ('systemctl') (unit waagent.service)... Jul 15 05:17:46.887382 systemd[1]: Reloading... Jul 15 05:17:46.949770 zram_generator::config[1997]: No configuration found. Jul 15 05:17:47.020361 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:17:47.101443 systemd[1]: Reloading finished in 213 ms. 
Jul 15 05:17:47.115579 waagent[1939]: 2025-07-15T05:17:47.115534Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 15 05:17:47.115645 waagent[1939]: 2025-07-15T05:17:47.115625Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 15 05:17:47.271150 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#206 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jul 15 05:17:47.506595 waagent[1939]: 2025-07-15T05:17:47.506552Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 15 05:17:47.506827 waagent[1939]: 2025-07-15T05:17:47.506804Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 15 05:17:47.507371 waagent[1939]: 2025-07-15T05:17:47.507327Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 15 05:17:47.507424 waagent[1939]: 2025-07-15T05:17:47.507404Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 05:17:47.507480 waagent[1939]: 2025-07-15T05:17:47.507463Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 05:17:47.507639 waagent[1939]: 2025-07-15T05:17:47.507621Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 15 05:17:47.507990 waagent[1939]: 2025-07-15T05:17:47.507964Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 15 05:17:47.508050 waagent[1939]: 2025-07-15T05:17:47.508030Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 05:17:47.508127 waagent[1939]: 2025-07-15T05:17:47.508095Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 05:17:47.508222 waagent[1939]: 2025-07-15T05:17:47.508205Z INFO EnvHandler ExtHandler Configure routes Jul 15 05:17:47.508328 waagent[1939]: 2025-07-15T05:17:47.508306Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 15 05:17:47.508328 waagent[1939]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 15 05:17:47.508328 waagent[1939]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jul 15 05:17:47.508328 waagent[1939]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 15 05:17:47.508328 waagent[1939]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 15 05:17:47.508328 waagent[1939]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 15 05:17:47.508328 waagent[1939]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 15 05:17:47.508542 waagent[1939]: 2025-07-15T05:17:47.508523Z INFO EnvHandler ExtHandler Gateway:None Jul 15 05:17:47.508819 waagent[1939]: 2025-07-15T05:17:47.508794Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 15 05:17:47.508937 waagent[1939]: 2025-07-15T05:17:47.508916Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jul 15 05:17:47.509014 waagent[1939]: 2025-07-15T05:17:47.508999Z INFO EnvHandler ExtHandler Routes:None Jul 15 05:17:47.509447 waagent[1939]: 2025-07-15T05:17:47.509422Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 15 05:17:47.509503 waagent[1939]: 2025-07-15T05:17:47.509486Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 15 05:17:47.509615 waagent[1939]: 2025-07-15T05:17:47.509580Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 15 05:17:47.516223 waagent[1939]: 2025-07-15T05:17:47.516195Z INFO ExtHandler ExtHandler Jul 15 05:17:47.516283 waagent[1939]: 2025-07-15T05:17:47.516245Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b01ef4cd-007b-43e6-b591-df5f947cadb6 correlation c7c81b52-d36b-48b7-b7cb-5436769a5c51 created: 2025-07-15T05:16:53.171776Z] Jul 15 05:17:47.516495 waagent[1939]: 2025-07-15T05:17:47.516474Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 15 05:17:47.516859 waagent[1939]: 2025-07-15T05:17:47.516840Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jul 15 05:17:47.548399 waagent[1939]: 2025-07-15T05:17:47.548124Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 15 05:17:47.548399 waagent[1939]: Try `iptables -h' or 'iptables --help' for more information.) Jul 15 05:17:47.548542 waagent[1939]: 2025-07-15T05:17:47.548520Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: FE4A33F7-38D5-4B8E-A425-F4D55A47C23C;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 15 05:17:47.578215 waagent[1939]: 2025-07-15T05:17:47.578126Z INFO MonitorHandler ExtHandler Network interfaces: Jul 15 05:17:47.578215 waagent[1939]: Executing ['ip', '-a', '-o', 'link']: Jul 15 05:17:47.578215 waagent[1939]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 15 05:17:47.578215 waagent[1939]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:47:86:dd brd ff:ff:ff:ff:ff:ff\ alias Network Device Jul 15 05:17:47.578215 waagent[1939]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:47:86:dd brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jul 15 05:17:47.578215 waagent[1939]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 15 05:17:47.578215 waagent[1939]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 15 05:17:47.578215 waagent[1939]: 2: eth0 inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 15 05:17:47.578215 waagent[1939]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 15 05:17:47.578215 waagent[1939]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 15 05:17:47.578215 waagent[1939]: 2: eth0 inet6 fe80::7eed:8dff:fe47:86dd/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 15 05:17:47.578215 waagent[1939]: 3: enP30832s1 inet6 
fe80::7eed:8dff:fe47:86dd/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 15 05:17:47.604617 waagent[1939]: 2025-07-15T05:17:47.604577Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 15 05:17:47.604617 waagent[1939]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 05:17:47.604617 waagent[1939]: pkts bytes target prot opt in out source destination Jul 15 05:17:47.604617 waagent[1939]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 15 05:17:47.604617 waagent[1939]: pkts bytes target prot opt in out source destination Jul 15 05:17:47.604617 waagent[1939]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 05:17:47.604617 waagent[1939]: pkts bytes target prot opt in out source destination Jul 15 05:17:47.604617 waagent[1939]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 15 05:17:47.604617 waagent[1939]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 15 05:17:47.604617 waagent[1939]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 15 05:17:47.607462 waagent[1939]: 2025-07-15T05:17:47.607420Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 15 05:17:47.607462 waagent[1939]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 05:17:47.607462 waagent[1939]: pkts bytes target prot opt in out source destination Jul 15 05:17:47.607462 waagent[1939]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 15 05:17:47.607462 waagent[1939]: pkts bytes target prot opt in out source destination Jul 15 05:17:47.607462 waagent[1939]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 05:17:47.607462 waagent[1939]: pkts bytes target prot opt in out source destination Jul 15 05:17:47.607462 waagent[1939]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 15 05:17:47.607462 waagent[1939]: 5 468 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 15 05:17:47.607462 waagent[1939]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 15 05:17:55.219491 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 05:17:55.221130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:17:55.735509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:17:55.740928 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:17:55.772200 kubelet[2090]: E0715 05:17:55.772172 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:17:55.774522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:17:55.774644 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:17:55.774937 systemd[1]: kubelet.service: Consumed 121ms CPU time, 109.5M memory peak. Jul 15 05:18:05.907420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 05:18:05.909111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:18:06.391485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 15 05:18:06.398953 (kubelet)[2105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:18:06.428348 kubelet[2105]: E0715 05:18:06.428317 2105 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:18:06.429677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:18:06.429817 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:18:06.430166 systemd[1]: kubelet.service: Consumed 116ms CPU time, 107.8M memory peak. Jul 15 05:18:07.144391 chronyd[1734]: Selected source PHC0 Jul 15 05:18:10.121784 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 05:18:10.122852 systemd[1]: Started sshd@0-10.200.8.39:22-10.200.16.10:44388.service - OpenSSH per-connection server daemon (10.200.16.10:44388). Jul 15 05:18:10.849989 sshd[2113]: Accepted publickey for core from 10.200.16.10 port 44388 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:18:10.851162 sshd-session[2113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:18:10.855511 systemd-logind[1724]: New session 3 of user core. Jul 15 05:18:10.862867 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 05:18:11.395680 systemd[1]: Started sshd@1-10.200.8.39:22-10.200.16.10:34256.service - OpenSSH per-connection server daemon (10.200.16.10:34256). Jul 15 05:18:12.018456 sshd[2119]: Accepted publickey for core from 10.200.16.10 port 34256 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:18:12.019649 sshd-session[2119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:18:12.023589 systemd-logind[1724]: New session 4 of user core. Jul 15 05:18:12.032835 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 05:18:12.459053 sshd[2122]: Connection closed by 10.200.16.10 port 34256 Jul 15 05:18:12.459559 sshd-session[2119]: pam_unix(sshd:session): session closed for user core Jul 15 05:18:12.462559 systemd[1]: sshd@1-10.200.8.39:22-10.200.16.10:34256.service: Deactivated successfully. Jul 15 05:18:12.463912 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 05:18:12.465294 systemd-logind[1724]: Session 4 logged out. Waiting for processes to exit. Jul 15 05:18:12.465900 systemd-logind[1724]: Removed session 4. Jul 15 05:18:12.568721 systemd[1]: Started sshd@2-10.200.8.39:22-10.200.16.10:34268.service - OpenSSH per-connection server daemon (10.200.16.10:34268). Jul 15 05:18:13.200777 sshd[2128]: Accepted publickey for core from 10.200.16.10 port 34268 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:18:13.201905 sshd-session[2128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:18:13.205963 systemd-logind[1724]: New session 5 of user core. Jul 15 05:18:13.210839 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 05:18:13.641476 sshd[2131]: Connection closed by 10.200.16.10 port 34268 Jul 15 05:18:13.642148 sshd-session[2128]: pam_unix(sshd:session): session closed for user core Jul 15 05:18:13.644843 systemd[1]: sshd@2-10.200.8.39:22-10.200.16.10:34268.service: Deactivated successfully. 
Jul 15 05:18:13.646154 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 05:18:13.647598 systemd-logind[1724]: Session 5 logged out. Waiting for processes to exit. Jul 15 05:18:13.648239 systemd-logind[1724]: Removed session 5. Jul 15 05:18:13.771623 systemd[1]: Started sshd@3-10.200.8.39:22-10.200.16.10:34284.service - OpenSSH per-connection server daemon (10.200.16.10:34284). Jul 15 05:18:14.396695 sshd[2137]: Accepted publickey for core from 10.200.16.10 port 34284 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:18:14.397809 sshd-session[2137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:18:14.401656 systemd-logind[1724]: New session 6 of user core. Jul 15 05:18:14.407871 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 05:18:14.837581 sshd[2140]: Connection closed by 10.200.16.10 port 34284 Jul 15 05:18:14.838070 sshd-session[2137]: pam_unix(sshd:session): session closed for user core Jul 15 05:18:14.841268 systemd[1]: sshd@3-10.200.8.39:22-10.200.16.10:34284.service: Deactivated successfully. Jul 15 05:18:14.842409 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 05:18:14.843033 systemd-logind[1724]: Session 6 logged out. Waiting for processes to exit. Jul 15 05:18:14.843861 systemd-logind[1724]: Removed session 6. Jul 15 05:18:14.953666 systemd[1]: Started sshd@4-10.200.8.39:22-10.200.16.10:34292.service - OpenSSH per-connection server daemon (10.200.16.10:34292). Jul 15 05:18:15.577524 sshd[2146]: Accepted publickey for core from 10.200.16.10 port 34292 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:18:15.578568 sshd-session[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:18:15.582683 systemd-logind[1724]: New session 7 of user core. Jul 15 05:18:15.591846 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 05:18:16.002804 sudo[2150]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 05:18:16.002996 sudo[2150]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:18:16.015276 sudo[2150]: pam_unix(sudo:session): session closed for user root Jul 15 05:18:16.115416 sshd[2149]: Connection closed by 10.200.16.10 port 34292 Jul 15 05:18:16.116040 sshd-session[2146]: pam_unix(sshd:session): session closed for user core Jul 15 05:18:16.118986 systemd[1]: sshd@4-10.200.8.39:22-10.200.16.10:34292.service: Deactivated successfully. Jul 15 05:18:16.120338 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 05:18:16.121670 systemd-logind[1724]: Session 7 logged out. Waiting for processes to exit. Jul 15 05:18:16.122341 systemd-logind[1724]: Removed session 7. Jul 15 05:18:16.239600 systemd[1]: Started sshd@5-10.200.8.39:22-10.200.16.10:34296.service - OpenSSH per-connection server daemon (10.200.16.10:34296). Jul 15 05:18:16.657497 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 15 05:18:16.659641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:18:16.863938 sshd[2156]: Accepted publickey for core from 10.200.16.10 port 34296 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:18:16.864782 sshd-session[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:18:16.868257 systemd-logind[1724]: New session 8 of user core. Jul 15 05:18:16.872887 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 15 05:18:17.126656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:18:17.132923 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:18:17.162926 kubelet[2168]: E0715 05:18:17.162898 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:18:17.164141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:18:17.164256 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:18:17.164580 systemd[1]: kubelet.service: Consumed 110ms CPU time, 108.6M memory peak. Jul 15 05:18:17.204010 sudo[2176]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 05:18:17.204199 sudo[2176]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:18:17.219106 sudo[2176]: pam_unix(sudo:session): session closed for user root Jul 15 05:18:17.222313 sudo[2175]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 05:18:17.222493 sudo[2175]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:18:17.228762 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:18:17.257091 augenrules[2198]: No rules Jul 15 05:18:17.257872 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:18:17.258049 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:18:17.258711 sudo[2175]: pam_unix(sudo:session): session closed for user root Jul 15 05:18:17.358053 sshd[2162]: Connection closed by 10.200.16.10 port 34296 Jul 15 05:18:17.358449 sshd-session[2156]: pam_unix(sshd:session): session closed for user core Jul 15 05:18:17.361194 systemd[1]: sshd@5-10.200.8.39:22-10.200.16.10:34296.service: Deactivated successfully. Jul 15 05:18:17.362445 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 05:18:17.363711 systemd-logind[1724]: Session 8 logged out. Waiting for processes to exit. Jul 15 05:18:17.364312 systemd-logind[1724]: Removed session 8. Jul 15 05:18:17.498801 systemd[1]: Started sshd@6-10.200.8.39:22-10.200.16.10:34312.service - OpenSSH per-connection server daemon (10.200.16.10:34312). Jul 15 05:18:18.129546 sshd[2207]: Accepted publickey for core from 10.200.16.10 port 34312 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:18:18.130720 sshd-session[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:18:18.134870 systemd-logind[1724]: New session 9 of user core. Jul 15 05:18:18.140865 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 05:18:18.472032 sudo[2211]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 05:18:18.472225 sudo[2211]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:18:19.633341 systemd[1]: Starting docker.service - Docker Application Container Engine... 
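The two sudo commands above delete the default audit rule files and restart audit-rules, which is why augenrules then reports "No rules"; the same sequence as plain shell, with paths exactly as logged:

    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules     # augenrules finds no remaining *.rules files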
Jul 15 05:18:19.646082 (dockerd)[2230]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 05:18:20.187574 dockerd[2230]: time="2025-07-15T05:18:20.187368399Z" level=info msg="Starting up" Jul 15 05:18:20.190091 dockerd[2230]: time="2025-07-15T05:18:20.190059962Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 05:18:20.198158 dockerd[2230]: time="2025-07-15T05:18:20.198127038Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 15 05:18:20.333068 dockerd[2230]: time="2025-07-15T05:18:20.332942022Z" level=info msg="Loading containers: start." Jul 15 05:18:20.357755 kernel: Initializing XFRM netlink socket Jul 15 05:18:20.719059 systemd-networkd[1374]: docker0: Link UP Jul 15 05:18:20.731445 dockerd[2230]: time="2025-07-15T05:18:20.731417170Z" level=info msg="Loading containers: done." Jul 15 05:18:20.754402 dockerd[2230]: time="2025-07-15T05:18:20.754378578Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 05:18:20.754490 dockerd[2230]: time="2025-07-15T05:18:20.754430522Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 15 05:18:20.754528 dockerd[2230]: time="2025-07-15T05:18:20.754489416Z" level=info msg="Initializing buildkit" Jul 15 05:18:20.798847 dockerd[2230]: time="2025-07-15T05:18:20.798808242Z" level=info msg="Completed buildkit initialization" Jul 15 05:18:20.803589 dockerd[2230]: time="2025-07-15T05:18:20.803550095Z" level=info msg="Daemon has completed initialization" Jul 15 05:18:20.803796 dockerd[2230]: time="2025-07-15T05:18:20.803606496Z" level=info msg="API listen on /run/docker.sock" Jul 15 05:18:20.803713 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 05:18:22.005921 containerd[1750]: time="2025-07-15T05:18:22.005881215Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 15 05:18:22.655685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2349592.mount: Deactivated successfully. 
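dockerd comes up on overlay2 and warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; a quick sketch for checking both facts on the running host (the /proc/config.gz path is an assumption and is only present when the kernel exposes its config there):

    docker info --format '{{.Driver}}'                      # overlay2, matching storage-driver=overlay2 above
    zcat /proc/config.gz | grep OVERLAY_FS_REDIRECT_DIR     # kernel option the warning refers to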
Jul 15 05:18:23.590784 containerd[1750]: time="2025-07-15T05:18:23.590747604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:23.592928 containerd[1750]: time="2025-07-15T05:18:23.592898735Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053" Jul 15 05:18:23.595785 containerd[1750]: time="2025-07-15T05:18:23.595753460Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:23.599591 containerd[1750]: time="2025-07-15T05:18:23.599545675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:23.600194 containerd[1750]: time="2025-07-15T05:18:23.600038568Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.59412336s" Jul 15 05:18:23.600194 containerd[1750]: time="2025-07-15T05:18:23.600066418Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 15 05:18:23.600606 containerd[1750]: time="2025-07-15T05:18:23.600589775Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 15 05:18:24.730878 containerd[1750]: time="2025-07-15T05:18:24.730841569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:24.733150 containerd[1750]: time="2025-07-15T05:18:24.733120832Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920" Jul 15 05:18:24.735776 containerd[1750]: time="2025-07-15T05:18:24.735742967Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:24.746847 containerd[1750]: time="2025-07-15T05:18:24.746818283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:24.747538 containerd[1750]: time="2025-07-15T05:18:24.747424856Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.146764855s" Jul 15 05:18:24.747538 containerd[1750]: time="2025-07-15T05:18:24.747449580Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 15 05:18:24.747905 
containerd[1750]: time="2025-07-15T05:18:24.747889524Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 15 05:18:25.735339 containerd[1750]: time="2025-07-15T05:18:25.735307201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:25.737704 containerd[1750]: time="2025-07-15T05:18:25.737674076Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924" Jul 15 05:18:25.740566 containerd[1750]: time="2025-07-15T05:18:25.740530694Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:25.744917 containerd[1750]: time="2025-07-15T05:18:25.744893409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:25.745531 containerd[1750]: time="2025-07-15T05:18:25.745427991Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 997.516887ms" Jul 15 05:18:25.745531 containerd[1750]: time="2025-07-15T05:18:25.745453589Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 15 05:18:25.745938 containerd[1750]: time="2025-07-15T05:18:25.745925305Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 15 05:18:26.643930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831414089.mount: Deactivated successfully. 
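The PullImage / ImageCreate pairs above are containerd's CRI plugin fetching the control-plane images; the same pulls can be reproduced by hand with the standard CRI and containerd clients (not taken from this log):

    crictl pull registry.k8s.io/kube-scheduler:v1.32.6
    ctr -n k8s.io images ls | grep registry.k8s.io/kube-    # CRI-pulled images live in the k8s.io namespace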
Jul 15 05:18:26.971605 containerd[1750]: time="2025-07-15T05:18:26.971572634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:26.973938 containerd[1750]: time="2025-07-15T05:18:26.973917744Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371" Jul 15 05:18:26.977309 containerd[1750]: time="2025-07-15T05:18:26.977290867Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:26.980802 containerd[1750]: time="2025-07-15T05:18:26.980782044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:26.981213 containerd[1750]: time="2025-07-15T05:18:26.981080871Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.235108877s" Jul 15 05:18:26.981213 containerd[1750]: time="2025-07-15T05:18:26.981109608Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 15 05:18:26.981607 containerd[1750]: time="2025-07-15T05:18:26.981578926Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 05:18:27.407241 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 15 05:18:27.408399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:18:27.845573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:18:27.850930 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:18:27.881224 kubelet[2515]: E0715 05:18:27.881191 2515 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:18:27.882531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:18:27.882655 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:18:27.882954 systemd[1]: kubelet.service: Consumed 115ms CPU time, 108.5M memory peak. Jul 15 05:18:27.907082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531658416.mount: Deactivated successfully. Jul 15 05:18:28.089750 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jul 15 05:18:28.446812 update_engine[1725]: I20250715 05:18:28.446774 1725 update_attempter.cc:509] Updating boot flags... 
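update_engine's "Updating boot flags" line is Flatcar's A/B update machinery marking the active partition bootable; its state can be queried with the client that ships on the image (an assumption about the default Flatcar install):

    update_engine_client -status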
Jul 15 05:18:28.854332 containerd[1750]: time="2025-07-15T05:18:28.854257668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:28.856428 containerd[1750]: time="2025-07-15T05:18:28.856395210Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 15 05:18:28.859613 containerd[1750]: time="2025-07-15T05:18:28.859576527Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:28.863248 containerd[1750]: time="2025-07-15T05:18:28.863203588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:28.863925 containerd[1750]: time="2025-07-15T05:18:28.863811004Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.88219805s" Jul 15 05:18:28.863925 containerd[1750]: time="2025-07-15T05:18:28.863839638Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 15 05:18:28.864377 containerd[1750]: time="2025-07-15T05:18:28.864272268Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 05:18:29.412758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1242920791.mount: Deactivated successfully. 
Jul 15 05:18:29.429982 containerd[1750]: time="2025-07-15T05:18:29.429956946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:18:29.432296 containerd[1750]: time="2025-07-15T05:18:29.432268407Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 15 05:18:29.434745 containerd[1750]: time="2025-07-15T05:18:29.434703039Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:18:29.438346 containerd[1750]: time="2025-07-15T05:18:29.438303484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:18:29.438752 containerd[1750]: time="2025-07-15T05:18:29.438645513Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 574.35219ms" Jul 15 05:18:29.438752 containerd[1750]: time="2025-07-15T05:18:29.438668530Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 05:18:29.439134 containerd[1750]: time="2025-07-15T05:18:29.439111514Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 15 05:18:30.048004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1131423552.mount: Deactivated successfully. 
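The pause:3.10 image pulled above is the sandbox ("infra") image that backs the RunPodSandbox calls later in this log; in containerd it is selected by the CRI plugin's sandbox_image setting (a sketch, assuming the conventional /etc/containerd/config.toml location):

    grep -n 'sandbox_image' /etc/containerd/config.toml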
Jul 15 05:18:31.551949 containerd[1750]: time="2025-07-15T05:18:31.551838469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:31.554299 containerd[1750]: time="2025-07-15T05:18:31.554274563Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Jul 15 05:18:31.557226 containerd[1750]: time="2025-07-15T05:18:31.557192016Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:31.560752 containerd[1750]: time="2025-07-15T05:18:31.560698015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:31.561408 containerd[1750]: time="2025-07-15T05:18:31.561288227Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.122154162s" Jul 15 05:18:31.561408 containerd[1750]: time="2025-07-15T05:18:31.561313289Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 15 05:18:33.787217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:18:33.787640 systemd[1]: kubelet.service: Consumed 115ms CPU time, 108.5M memory peak. Jul 15 05:18:33.789357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:18:33.809654 systemd[1]: Reload requested from client PID 2693 ('systemctl') (unit session-9.scope)... Jul 15 05:18:33.809664 systemd[1]: Reloading... Jul 15 05:18:33.883785 zram_generator::config[2739]: No configuration found. Jul 15 05:18:33.990541 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:18:34.071050 systemd[1]: Reloading finished in 261 ms. Jul 15 05:18:34.114668 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 05:18:34.114754 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 05:18:34.114995 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:18:34.115044 systemd[1]: kubelet.service: Consumed 61ms CPU time, 69.9M memory peak. Jul 15 05:18:34.116174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:18:34.731697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:18:34.736965 (kubelet)[2806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:18:34.768699 kubelet[2806]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:18:34.768699 kubelet[2806]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 15 05:18:34.768699 kubelet[2806]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:18:34.768936 kubelet[2806]: I0715 05:18:34.768747 2806 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:18:35.128729 kubelet[2806]: I0715 05:18:35.128665 2806 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 05:18:35.128729 kubelet[2806]: I0715 05:18:35.128683 2806 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:18:35.129069 kubelet[2806]: I0715 05:18:35.129049 2806 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 05:18:35.155040 kubelet[2806]: E0715 05:18:35.155003 2806 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:18:35.155820 kubelet[2806]: I0715 05:18:35.155799 2806 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:18:35.160952 kubelet[2806]: I0715 05:18:35.160937 2806 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:18:35.164803 kubelet[2806]: I0715 05:18:35.164712 2806 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:18:35.164914 kubelet[2806]: I0715 05:18:35.164893 2806 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:18:35.165083 kubelet[2806]: I0715 05:18:35.164917 2806 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4396.0.0-n-11ebebb5c9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:18:35.165641 kubelet[2806]: I0715 05:18:35.165627 2806 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:18:35.165668 kubelet[2806]: I0715 05:18:35.165642 2806 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 05:18:35.165764 kubelet[2806]: I0715 05:18:35.165754 2806 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:18:35.169180 kubelet[2806]: I0715 05:18:35.169163 2806 kubelet.go:446] "Attempting to sync node with API server" Jul 15 05:18:35.169227 kubelet[2806]: I0715 05:18:35.169187 2806 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:18:35.169227 kubelet[2806]: I0715 05:18:35.169206 2806 kubelet.go:352] "Adding apiserver pod source" Jul 15 05:18:35.169227 kubelet[2806]: I0715 05:18:35.169216 2806 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:18:35.176022 kubelet[2806]: W0715 05:18:35.175368 2806 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4396.0.0-n-11ebebb5c9&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 15 05:18:35.176022 kubelet[2806]: E0715 05:18:35.175420 2806 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4396.0.0-n-11ebebb5c9&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:18:35.176022 
kubelet[2806]: W0715 05:18:35.175479 2806 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 15 05:18:35.176022 kubelet[2806]: E0715 05:18:35.175504 2806 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:18:35.176022 kubelet[2806]: I0715 05:18:35.175569 2806 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:18:35.176022 kubelet[2806]: I0715 05:18:35.175888 2806 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 05:18:35.176022 kubelet[2806]: W0715 05:18:35.175928 2806 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 05:18:35.177904 kubelet[2806]: I0715 05:18:35.177885 2806 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 05:18:35.177970 kubelet[2806]: I0715 05:18:35.177915 2806 server.go:1287] "Started kubelet" Jul 15 05:18:35.178059 kubelet[2806]: I0715 05:18:35.178018 2806 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:18:35.178866 kubelet[2806]: I0715 05:18:35.178761 2806 server.go:479] "Adding debug handlers to kubelet server" Jul 15 05:18:35.180700 kubelet[2806]: I0715 05:18:35.180465 2806 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:18:35.182162 kubelet[2806]: I0715 05:18:35.181789 2806 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:18:35.182162 kubelet[2806]: I0715 05:18:35.181950 2806 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:18:35.183401 kubelet[2806]: E0715 05:18:35.182074 2806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4396.0.0-n-11ebebb5c9.1852550c08e77128 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4396.0.0-n-11ebebb5c9,UID:ci-4396.0.0-n-11ebebb5c9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4396.0.0-n-11ebebb5c9,},FirstTimestamp:2025-07-15 05:18:35.17789828 +0000 UTC m=+0.437913052,LastTimestamp:2025-07-15 05:18:35.17789828 +0000 UTC m=+0.437913052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4396.0.0-n-11ebebb5c9,}" Jul 15 05:18:35.185757 kubelet[2806]: I0715 05:18:35.185213 2806 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:18:35.185757 kubelet[2806]: E0715 05:18:35.185636 2806 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-11ebebb5c9\" not found" Jul 
15 05:18:35.186696 kubelet[2806]: I0715 05:18:35.186680 2806 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 05:18:35.187032 kubelet[2806]: I0715 05:18:35.187021 2806 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 05:18:35.187111 kubelet[2806]: I0715 05:18:35.187105 2806 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:18:35.187454 kubelet[2806]: W0715 05:18:35.187429 2806 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 15 05:18:35.187525 kubelet[2806]: E0715 05:18:35.187515 2806 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:18:35.187619 kubelet[2806]: E0715 05:18:35.187601 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-11ebebb5c9?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="200ms" Jul 15 05:18:35.187814 kubelet[2806]: I0715 05:18:35.187806 2806 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:18:35.187925 kubelet[2806]: I0715 05:18:35.187916 2806 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:18:35.189099 kubelet[2806]: I0715 05:18:35.189088 2806 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:18:35.192681 kubelet[2806]: E0715 05:18:35.192663 2806 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:18:35.210616 kubelet[2806]: I0715 05:18:35.210601 2806 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 05:18:35.210616 kubelet[2806]: I0715 05:18:35.210614 2806 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 05:18:35.210616 kubelet[2806]: I0715 05:18:35.210627 2806 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:18:35.217377 kubelet[2806]: I0715 05:18:35.217362 2806 policy_none.go:49] "None policy: Start" Jul 15 05:18:35.217377 kubelet[2806]: I0715 05:18:35.217376 2806 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 05:18:35.217445 kubelet[2806]: I0715 05:18:35.217385 2806 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:18:35.223954 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 05:18:35.227318 kubelet[2806]: I0715 05:18:35.227243 2806 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 05:18:35.228036 kubelet[2806]: I0715 05:18:35.228022 2806 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 05:18:35.228233 kubelet[2806]: I0715 05:18:35.228092 2806 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 05:18:35.228233 kubelet[2806]: I0715 05:18:35.228105 2806 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 05:18:35.228233 kubelet[2806]: I0715 05:18:35.228109 2806 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 05:18:35.228233 kubelet[2806]: E0715 05:18:35.228133 2806 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:18:35.231012 kubelet[2806]: W0715 05:18:35.230994 2806 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 15 05:18:35.231106 kubelet[2806]: E0715 05:18:35.231092 2806 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:18:35.233701 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 05:18:35.240300 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 05:18:35.241225 kubelet[2806]: I0715 05:18:35.241208 2806 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 05:18:35.241337 kubelet[2806]: I0715 05:18:35.241326 2806 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:18:35.241362 kubelet[2806]: I0715 05:18:35.241336 2806 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:18:35.241796 kubelet[2806]: I0715 05:18:35.241671 2806 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:18:35.242722 kubelet[2806]: E0715 05:18:35.242678 2806 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 05:18:35.242722 kubelet[2806]: E0715 05:18:35.242707 2806 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4396.0.0-n-11ebebb5c9\" not found" Jul 15 05:18:35.335561 systemd[1]: Created slice kubepods-burstable-pod9fc644b8ef417f4e44dd775b7436a7c0.slice - libcontainer container kubepods-burstable-pod9fc644b8ef417f4e44dd775b7436a7c0.slice. 
Jul 15 05:18:35.342443 kubelet[2806]: I0715 05:18:35.342400 2806 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.342688 kubelet[2806]: E0715 05:18:35.342660 2806 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.348680 kubelet[2806]: E0715 05:18:35.348667 2806 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-11ebebb5c9\" not found" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.350142 systemd[1]: Created slice kubepods-burstable-pod07f418e9b10c0d075785b8353fbc95af.slice - libcontainer container kubepods-burstable-pod07f418e9b10c0d075785b8353fbc95af.slice. Jul 15 05:18:35.359592 kubelet[2806]: E0715 05:18:35.359468 2806 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-11ebebb5c9\" not found" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.361438 systemd[1]: Created slice kubepods-burstable-pod4e2bae4d0c08a4e2a45e764550f57c71.slice - libcontainer container kubepods-burstable-pod4e2bae4d0c08a4e2a45e764550f57c71.slice. Jul 15 05:18:35.362688 kubelet[2806]: E0715 05:18:35.362667 2806 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-11ebebb5c9\" not found" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.388052 kubelet[2806]: I0715 05:18:35.387945 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fc644b8ef417f4e44dd775b7436a7c0-ca-certs\") pod \"kube-apiserver-ci-4396.0.0-n-11ebebb5c9\" (UID: \"9fc644b8ef417f4e44dd775b7436a7c0\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.388052 kubelet[2806]: E0715 05:18:35.387959 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-11ebebb5c9?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="400ms" Jul 15 05:18:35.388052 kubelet[2806]: I0715 05:18:35.387969 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fc644b8ef417f4e44dd775b7436a7c0-k8s-certs\") pod \"kube-apiserver-ci-4396.0.0-n-11ebebb5c9\" (UID: \"9fc644b8ef417f4e44dd775b7436a7c0\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.388052 kubelet[2806]: I0715 05:18:35.387983 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/07f418e9b10c0d075785b8353fbc95af-flexvolume-dir\") pod \"kube-controller-manager-ci-4396.0.0-n-11ebebb5c9\" (UID: \"07f418e9b10c0d075785b8353fbc95af\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.388052 kubelet[2806]: I0715 05:18:35.387996 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07f418e9b10c0d075785b8353fbc95af-k8s-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-11ebebb5c9\" (UID: \"07f418e9b10c0d075785b8353fbc95af\") " 
pod="kube-system/kube-controller-manager-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.388219 kubelet[2806]: I0715 05:18:35.388010 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07f418e9b10c0d075785b8353fbc95af-kubeconfig\") pod \"kube-controller-manager-ci-4396.0.0-n-11ebebb5c9\" (UID: \"07f418e9b10c0d075785b8353fbc95af\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.388219 kubelet[2806]: I0715 05:18:35.388023 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07f418e9b10c0d075785b8353fbc95af-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4396.0.0-n-11ebebb5c9\" (UID: \"07f418e9b10c0d075785b8353fbc95af\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.388219 kubelet[2806]: I0715 05:18:35.388037 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fc644b8ef417f4e44dd775b7436a7c0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4396.0.0-n-11ebebb5c9\" (UID: \"9fc644b8ef417f4e44dd775b7436a7c0\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.388219 kubelet[2806]: I0715 05:18:35.388051 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07f418e9b10c0d075785b8353fbc95af-ca-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-11ebebb5c9\" (UID: \"07f418e9b10c0d075785b8353fbc95af\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.388219 kubelet[2806]: I0715 05:18:35.388067 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e2bae4d0c08a4e2a45e764550f57c71-kubeconfig\") pod \"kube-scheduler-ci-4396.0.0-n-11ebebb5c9\" (UID: \"4e2bae4d0c08a4e2a45e764550f57c71\") " pod="kube-system/kube-scheduler-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.545035 kubelet[2806]: I0715 05:18:35.544983 2806 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.545286 kubelet[2806]: E0715 05:18:35.545261 2806 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.650619 containerd[1750]: time="2025-07-15T05:18:35.650528368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4396.0.0-n-11ebebb5c9,Uid:9fc644b8ef417f4e44dd775b7436a7c0,Namespace:kube-system,Attempt:0,}" Jul 15 05:18:35.661040 containerd[1750]: time="2025-07-15T05:18:35.661014034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4396.0.0-n-11ebebb5c9,Uid:07f418e9b10c0d075785b8353fbc95af,Namespace:kube-system,Attempt:0,}" Jul 15 05:18:35.665324 containerd[1750]: time="2025-07-15T05:18:35.665291914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4396.0.0-n-11ebebb5c9,Uid:4e2bae4d0c08a4e2a45e764550f57c71,Namespace:kube-system,Attempt:0,}" Jul 15 05:18:35.723371 containerd[1750]: time="2025-07-15T05:18:35.722979765Z" level=info msg="connecting to shim 
31bf01158154eefe0fa5c3e93e669adf22bdc0ed95ac13070320cde4e8ed70e0" address="unix:///run/containerd/s/69ee139ac1b7f85fd8b1a0816ba607abe99eb70db2ee8b1163dc92df18f80dad" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:18:35.744989 systemd[1]: Started cri-containerd-31bf01158154eefe0fa5c3e93e669adf22bdc0ed95ac13070320cde4e8ed70e0.scope - libcontainer container 31bf01158154eefe0fa5c3e93e669adf22bdc0ed95ac13070320cde4e8ed70e0. Jul 15 05:18:35.750023 containerd[1750]: time="2025-07-15T05:18:35.750000313Z" level=info msg="connecting to shim 3ffd6e99030f2255d74b8301e48707e21fd92543fe8deb5ced1459bc871e88a3" address="unix:///run/containerd/s/f2e44486b73e9ec98e94903e539ba1b718623844ff37734e02cce1a1a61cbff0" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:18:35.772038 containerd[1750]: time="2025-07-15T05:18:35.772007976Z" level=info msg="connecting to shim 25060829e5f967623c1152586b317025b3fde6d955ac1bdafa848107ea53226d" address="unix:///run/containerd/s/285bbbb5f666566ab76e6e45a95bcca4ee4c4d56fb5c839fa8c43226df940257" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:18:35.777911 systemd[1]: Started cri-containerd-3ffd6e99030f2255d74b8301e48707e21fd92543fe8deb5ced1459bc871e88a3.scope - libcontainer container 3ffd6e99030f2255d74b8301e48707e21fd92543fe8deb5ced1459bc871e88a3. Jul 15 05:18:35.788890 kubelet[2806]: E0715 05:18:35.788863 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-11ebebb5c9?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="800ms" Jul 15 05:18:35.798011 systemd[1]: Started cri-containerd-25060829e5f967623c1152586b317025b3fde6d955ac1bdafa848107ea53226d.scope - libcontainer container 25060829e5f967623c1152586b317025b3fde6d955ac1bdafa848107ea53226d. 
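Each "connecting to shim" / cri-containerd-<id>.scope pair above is one pod sandbox for a static manifest under /etc/kubernetes/manifests; once up, the sandboxes and their containers are visible through the CRI (standard crictl invocations, not from this log):

    crictl pods --name kube-apiserver-ci-4396.0.0-n-11ebebb5c9
    crictl ps -a      # containers created inside those sandboxes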
Jul 15 05:18:35.815057 containerd[1750]: time="2025-07-15T05:18:35.815027873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4396.0.0-n-11ebebb5c9,Uid:9fc644b8ef417f4e44dd775b7436a7c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"31bf01158154eefe0fa5c3e93e669adf22bdc0ed95ac13070320cde4e8ed70e0\"" Jul 15 05:18:35.818477 containerd[1750]: time="2025-07-15T05:18:35.818399611Z" level=info msg="CreateContainer within sandbox \"31bf01158154eefe0fa5c3e93e669adf22bdc0ed95ac13070320cde4e8ed70e0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 05:18:35.838437 containerd[1750]: time="2025-07-15T05:18:35.838345218Z" level=info msg="Container f2b4d6a9870aaf945773c9b0e7f0a7c3a3404254c2153d81f2350512ca96378d: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:18:35.842309 containerd[1750]: time="2025-07-15T05:18:35.842277389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4396.0.0-n-11ebebb5c9,Uid:07f418e9b10c0d075785b8353fbc95af,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ffd6e99030f2255d74b8301e48707e21fd92543fe8deb5ced1459bc871e88a3\"" Jul 15 05:18:35.844657 containerd[1750]: time="2025-07-15T05:18:35.844282697Z" level=info msg="CreateContainer within sandbox \"3ffd6e99030f2255d74b8301e48707e21fd92543fe8deb5ced1459bc871e88a3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 05:18:35.859752 containerd[1750]: time="2025-07-15T05:18:35.859715404Z" level=info msg="CreateContainer within sandbox \"31bf01158154eefe0fa5c3e93e669adf22bdc0ed95ac13070320cde4e8ed70e0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f2b4d6a9870aaf945773c9b0e7f0a7c3a3404254c2153d81f2350512ca96378d\"" Jul 15 05:18:35.860135 containerd[1750]: time="2025-07-15T05:18:35.860073827Z" level=info msg="StartContainer for \"f2b4d6a9870aaf945773c9b0e7f0a7c3a3404254c2153d81f2350512ca96378d\"" Jul 15 05:18:35.860826 containerd[1750]: time="2025-07-15T05:18:35.860797941Z" level=info msg="connecting to shim f2b4d6a9870aaf945773c9b0e7f0a7c3a3404254c2153d81f2350512ca96378d" address="unix:///run/containerd/s/69ee139ac1b7f85fd8b1a0816ba607abe99eb70db2ee8b1163dc92df18f80dad" protocol=ttrpc version=3 Jul 15 05:18:35.863012 containerd[1750]: time="2025-07-15T05:18:35.862988929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4396.0.0-n-11ebebb5c9,Uid:4e2bae4d0c08a4e2a45e764550f57c71,Namespace:kube-system,Attempt:0,} returns sandbox id \"25060829e5f967623c1152586b317025b3fde6d955ac1bdafa848107ea53226d\"" Jul 15 05:18:35.865031 containerd[1750]: time="2025-07-15T05:18:35.864972755Z" level=info msg="CreateContainer within sandbox \"25060829e5f967623c1152586b317025b3fde6d955ac1bdafa848107ea53226d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 05:18:35.876901 systemd[1]: Started cri-containerd-f2b4d6a9870aaf945773c9b0e7f0a7c3a3404254c2153d81f2350512ca96378d.scope - libcontainer container f2b4d6a9870aaf945773c9b0e7f0a7c3a3404254c2153d81f2350512ca96378d. 
Jul 15 05:18:35.877456 containerd[1750]: time="2025-07-15T05:18:35.877434940Z" level=info msg="Container fff04733d93b6eb85bf8cd2e9d481d9c1276428ea381ce43a3d08def13f06180: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:18:35.893063 containerd[1750]: time="2025-07-15T05:18:35.893043815Z" level=info msg="CreateContainer within sandbox \"3ffd6e99030f2255d74b8301e48707e21fd92543fe8deb5ced1459bc871e88a3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fff04733d93b6eb85bf8cd2e9d481d9c1276428ea381ce43a3d08def13f06180\"" Jul 15 05:18:35.893479 containerd[1750]: time="2025-07-15T05:18:35.893460359Z" level=info msg="StartContainer for \"fff04733d93b6eb85bf8cd2e9d481d9c1276428ea381ce43a3d08def13f06180\"" Jul 15 05:18:35.896448 containerd[1750]: time="2025-07-15T05:18:35.896420658Z" level=info msg="connecting to shim fff04733d93b6eb85bf8cd2e9d481d9c1276428ea381ce43a3d08def13f06180" address="unix:///run/containerd/s/f2e44486b73e9ec98e94903e539ba1b718623844ff37734e02cce1a1a61cbff0" protocol=ttrpc version=3 Jul 15 05:18:35.902353 containerd[1750]: time="2025-07-15T05:18:35.901964914Z" level=info msg="Container 08f5b60a135b25b6ef3b20cf1a68c0842616f24c0e30ab0f64e7aed63ab7a1c5: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:18:35.919948 systemd[1]: Started cri-containerd-fff04733d93b6eb85bf8cd2e9d481d9c1276428ea381ce43a3d08def13f06180.scope - libcontainer container fff04733d93b6eb85bf8cd2e9d481d9c1276428ea381ce43a3d08def13f06180. Jul 15 05:18:35.934235 containerd[1750]: time="2025-07-15T05:18:35.933124532Z" level=info msg="StartContainer for \"f2b4d6a9870aaf945773c9b0e7f0a7c3a3404254c2153d81f2350512ca96378d\" returns successfully" Jul 15 05:18:35.934235 containerd[1750]: time="2025-07-15T05:18:35.933258764Z" level=info msg="CreateContainer within sandbox \"25060829e5f967623c1152586b317025b3fde6d955ac1bdafa848107ea53226d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"08f5b60a135b25b6ef3b20cf1a68c0842616f24c0e30ab0f64e7aed63ab7a1c5\"" Jul 15 05:18:35.934506 containerd[1750]: time="2025-07-15T05:18:35.934491583Z" level=info msg="StartContainer for \"08f5b60a135b25b6ef3b20cf1a68c0842616f24c0e30ab0f64e7aed63ab7a1c5\"" Jul 15 05:18:35.935536 containerd[1750]: time="2025-07-15T05:18:35.935478920Z" level=info msg="connecting to shim 08f5b60a135b25b6ef3b20cf1a68c0842616f24c0e30ab0f64e7aed63ab7a1c5" address="unix:///run/containerd/s/285bbbb5f666566ab76e6e45a95bcca4ee4c4d56fb5c839fa8c43226df940257" protocol=ttrpc version=3 Jul 15 05:18:35.947019 kubelet[2806]: I0715 05:18:35.947003 2806 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.947241 kubelet[2806]: E0715 05:18:35.947224 2806 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:35.962012 systemd[1]: Started cri-containerd-08f5b60a135b25b6ef3b20cf1a68c0842616f24c0e30ab0f64e7aed63ab7a1c5.scope - libcontainer container 08f5b60a135b25b6ef3b20cf1a68c0842616f24c0e30ab0f64e7aed63ab7a1c5. 
Jul 15 05:18:35.980859 containerd[1750]: time="2025-07-15T05:18:35.980835801Z" level=info msg="StartContainer for \"fff04733d93b6eb85bf8cd2e9d481d9c1276428ea381ce43a3d08def13f06180\" returns successfully" Jul 15 05:18:36.031602 containerd[1750]: time="2025-07-15T05:18:36.031533508Z" level=info msg="StartContainer for \"08f5b60a135b25b6ef3b20cf1a68c0842616f24c0e30ab0f64e7aed63ab7a1c5\" returns successfully" Jul 15 05:18:36.236225 kubelet[2806]: E0715 05:18:36.236203 2806 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-11ebebb5c9\" not found" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:36.238522 kubelet[2806]: E0715 05:18:36.238505 2806 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-11ebebb5c9\" not found" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:36.242199 kubelet[2806]: E0715 05:18:36.242183 2806 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-11ebebb5c9\" not found" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:36.749204 kubelet[2806]: I0715 05:18:36.749182 2806 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:37.245184 kubelet[2806]: E0715 05:18:37.245140 2806 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-11ebebb5c9\" not found" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:37.246031 kubelet[2806]: E0715 05:18:37.245719 2806 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-11ebebb5c9\" not found" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:37.550155 kubelet[2806]: E0715 05:18:37.550061 2806 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4396.0.0-n-11ebebb5c9\" not found" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:37.899695 kubelet[2806]: I0715 05:18:37.899121 2806 kubelet_node_status.go:78] "Successfully registered node" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:37.899695 kubelet[2806]: E0715 05:18:37.899401 2806 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4396.0.0-n-11ebebb5c9\": node \"ci-4396.0.0-n-11ebebb5c9\" not found" Jul 15 05:18:37.987071 kubelet[2806]: I0715 05:18:37.987051 2806 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:37.990806 kubelet[2806]: E0715 05:18:37.990781 2806 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4396.0.0-n-11ebebb5c9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:37.990806 kubelet[2806]: I0715 05:18:37.990803 2806 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:37.992402 kubelet[2806]: E0715 05:18:37.992245 2806 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4396.0.0-n-11ebebb5c9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:37.992402 kubelet[2806]: I0715 05:18:37.992267 2806 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:37.994238 kubelet[2806]: E0715 05:18:37.994202 2806 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4396.0.0-n-11ebebb5c9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:38.173455 kubelet[2806]: I0715 05:18:38.173429 2806 apiserver.go:52] "Watching apiserver" Jul 15 05:18:38.187893 kubelet[2806]: I0715 05:18:38.187873 2806 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 05:18:42.757242 kubelet[2806]: I0715 05:18:42.757203 2806 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:42.761499 kubelet[2806]: W0715 05:18:42.761475 2806 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 15 05:18:43.200079 systemd[1]: Reload requested from client PID 3074 ('systemctl') (unit session-9.scope)... Jul 15 05:18:43.200091 systemd[1]: Reloading... Jul 15 05:18:43.269793 zram_generator::config[3117]: No configuration found. Jul 15 05:18:43.350412 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:18:43.462616 systemd[1]: Reloading finished in 262 ms. Jul 15 05:18:43.477972 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:18:43.489338 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 05:18:43.489538 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:18:43.489580 systemd[1]: kubelet.service: Consumed 743ms CPU time, 131.4M memory peak. Jul 15 05:18:43.490761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:18:43.958482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:18:43.962170 (kubelet)[3187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:18:43.996462 kubelet[3187]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:18:43.996845 kubelet[3187]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 05:18:43.996845 kubelet[3187]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 05:18:43.997078 kubelet[3187]: I0715 05:18:43.997050 3187 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:18:44.003749 kubelet[3187]: I0715 05:18:44.003382 3187 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 05:18:44.003749 kubelet[3187]: I0715 05:18:44.003400 3187 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:18:44.003749 kubelet[3187]: I0715 05:18:44.003605 3187 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 05:18:44.005099 kubelet[3187]: I0715 05:18:44.005077 3187 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 05:18:44.006804 kubelet[3187]: I0715 05:18:44.006786 3187 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:18:44.009585 kubelet[3187]: I0715 05:18:44.009571 3187 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:18:44.012135 kubelet[3187]: I0715 05:18:44.012073 3187 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 05:18:44.012413 kubelet[3187]: I0715 05:18:44.012394 3187 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:18:44.012780 kubelet[3187]: I0715 05:18:44.012458 3187 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4396.0.0-n-11ebebb5c9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:18:44.013004 kubelet[3187]: I0715 05:18:44.012934 3187 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:18:44.013004 kubelet[3187]: I0715 05:18:44.012947 3187 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 05:18:44.013087 kubelet[3187]: I0715 05:18:44.013081 3187 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:18:44.013257 
kubelet[3187]: I0715 05:18:44.013251 3187 kubelet.go:446] "Attempting to sync node with API server" Jul 15 05:18:44.013257 kubelet[3187]: I0715 05:18:44.013292 3187 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:18:44.013257 kubelet[3187]: I0715 05:18:44.013313 3187 kubelet.go:352] "Adding apiserver pod source" Jul 15 05:18:44.013257 kubelet[3187]: I0715 05:18:44.013322 3187 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:18:44.016337 kubelet[3187]: I0715 05:18:44.015839 3187 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:18:44.018749 kubelet[3187]: I0715 05:18:44.017193 3187 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 05:18:44.021578 kubelet[3187]: I0715 05:18:44.021564 3187 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 05:18:44.021646 kubelet[3187]: I0715 05:18:44.021590 3187 server.go:1287] "Started kubelet" Jul 15 05:18:44.023393 kubelet[3187]: I0715 05:18:44.023072 3187 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:18:44.027425 kubelet[3187]: I0715 05:18:44.027392 3187 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:18:44.028347 kubelet[3187]: I0715 05:18:44.028180 3187 server.go:479] "Adding debug handlers to kubelet server" Jul 15 05:18:44.029229 kubelet[3187]: I0715 05:18:44.029028 3187 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:18:44.029229 kubelet[3187]: I0715 05:18:44.029185 3187 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:18:44.029399 kubelet[3187]: I0715 05:18:44.029322 3187 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:18:44.030904 kubelet[3187]: I0715 05:18:44.030434 3187 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 05:18:44.030904 kubelet[3187]: E0715 05:18:44.030581 3187 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-11ebebb5c9\" not found" Jul 15 05:18:44.033190 kubelet[3187]: I0715 05:18:44.031945 3187 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 05:18:44.033190 kubelet[3187]: I0715 05:18:44.032031 3187 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:18:44.033746 kubelet[3187]: I0715 05:18:44.033485 3187 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 05:18:44.035106 kubelet[3187]: I0715 05:18:44.034396 3187 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 05:18:44.035106 kubelet[3187]: I0715 05:18:44.034423 3187 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 05:18:44.035106 kubelet[3187]: I0715 05:18:44.034438 3187 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 15 05:18:44.035106 kubelet[3187]: I0715 05:18:44.034444 3187 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 05:18:44.035106 kubelet[3187]: E0715 05:18:44.034473 3187 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:18:44.044666 kubelet[3187]: I0715 05:18:44.044490 3187 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:18:44.047076 kubelet[3187]: E0715 05:18:44.045435 3187 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:18:44.047076 kubelet[3187]: I0715 05:18:44.046098 3187 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:18:44.047076 kubelet[3187]: I0715 05:18:44.046107 3187 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:18:44.086691 kubelet[3187]: I0715 05:18:44.086675 3187 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 05:18:44.086691 kubelet[3187]: I0715 05:18:44.086685 3187 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 05:18:44.086844 kubelet[3187]: I0715 05:18:44.086698 3187 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:18:44.086844 kubelet[3187]: I0715 05:18:44.086820 3187 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 05:18:44.086844 kubelet[3187]: I0715 05:18:44.086828 3187 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 05:18:44.086844 kubelet[3187]: I0715 05:18:44.086842 3187 policy_none.go:49] "None policy: Start" Jul 15 05:18:44.086844 kubelet[3187]: I0715 05:18:44.086849 3187 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 05:18:44.086962 kubelet[3187]: I0715 05:18:44.086856 3187 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:18:44.086962 kubelet[3187]: I0715 05:18:44.086936 3187 state_mem.go:75] "Updated machine memory state" Jul 15 05:18:44.089326 kubelet[3187]: I0715 05:18:44.089310 3187 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 05:18:44.089626 kubelet[3187]: I0715 05:18:44.089482 3187 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:18:44.089626 kubelet[3187]: I0715 05:18:44.089494 3187 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:18:44.089860 kubelet[3187]: I0715 05:18:44.089851 3187 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:18:44.091146 kubelet[3187]: E0715 05:18:44.091134 3187 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 05:18:44.135708 kubelet[3187]: I0715 05:18:44.135685 3187 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:44.135896 kubelet[3187]: I0715 05:18:44.135812 3187 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:44.135999 kubelet[3187]: I0715 05:18:44.135992 3187 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:44.145695 kubelet[3187]: W0715 05:18:44.145297 3187 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 15 05:18:44.145695 kubelet[3187]: W0715 05:18:44.145344 3187 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 15 05:18:44.145695 kubelet[3187]: W0715 05:18:44.145406 3187 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 15 05:18:44.145839 kubelet[3187]: E0715 05:18:44.145723 3187 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4396.0.0-n-11ebebb5c9\" already exists" pod="kube-system/kube-scheduler-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:44.192410 kubelet[3187]: I0715 05:18:44.192400 3187 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:44.202526 kubelet[3187]: I0715 05:18:44.201812 3187 kubelet_node_status.go:124] "Node was previously registered" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:44.202526 kubelet[3187]: I0715 05:18:44.201963 3187 kubelet_node_status.go:78] "Successfully registered node" node="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:44.333784 kubelet[3187]: I0715 05:18:44.333629 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fc644b8ef417f4e44dd775b7436a7c0-k8s-certs\") pod \"kube-apiserver-ci-4396.0.0-n-11ebebb5c9\" (UID: \"9fc644b8ef417f4e44dd775b7436a7c0\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:44.333784 kubelet[3187]: I0715 05:18:44.333659 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fc644b8ef417f4e44dd775b7436a7c0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4396.0.0-n-11ebebb5c9\" (UID: \"9fc644b8ef417f4e44dd775b7436a7c0\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:44.333784 kubelet[3187]: I0715 05:18:44.333679 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/07f418e9b10c0d075785b8353fbc95af-flexvolume-dir\") pod \"kube-controller-manager-ci-4396.0.0-n-11ebebb5c9\" (UID: \"07f418e9b10c0d075785b8353fbc95af\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:44.333784 kubelet[3187]: I0715 05:18:44.333696 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07f418e9b10c0d075785b8353fbc95af-kubeconfig\") pod 
\"kube-controller-manager-ci-4396.0.0-n-11ebebb5c9\" (UID: \"07f418e9b10c0d075785b8353fbc95af\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:44.333784 kubelet[3187]: I0715 05:18:44.333713 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07f418e9b10c0d075785b8353fbc95af-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4396.0.0-n-11ebebb5c9\" (UID: \"07f418e9b10c0d075785b8353fbc95af\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:44.334283 kubelet[3187]: I0715 05:18:44.333728 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e2bae4d0c08a4e2a45e764550f57c71-kubeconfig\") pod \"kube-scheduler-ci-4396.0.0-n-11ebebb5c9\" (UID: \"4e2bae4d0c08a4e2a45e764550f57c71\") " pod="kube-system/kube-scheduler-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:44.334283 kubelet[3187]: I0715 05:18:44.334245 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fc644b8ef417f4e44dd775b7436a7c0-ca-certs\") pod \"kube-apiserver-ci-4396.0.0-n-11ebebb5c9\" (UID: \"9fc644b8ef417f4e44dd775b7436a7c0\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:44.334283 kubelet[3187]: I0715 05:18:44.334266 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07f418e9b10c0d075785b8353fbc95af-k8s-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-11ebebb5c9\" (UID: \"07f418e9b10c0d075785b8353fbc95af\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:44.334283 kubelet[3187]: I0715 05:18:44.334281 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07f418e9b10c0d075785b8353fbc95af-ca-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-11ebebb5c9\" (UID: \"07f418e9b10c0d075785b8353fbc95af\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:45.021350 kubelet[3187]: I0715 05:18:45.021325 3187 apiserver.go:52] "Watching apiserver" Jul 15 05:18:45.032609 kubelet[3187]: I0715 05:18:45.032594 3187 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 05:18:45.073749 kubelet[3187]: I0715 05:18:45.073719 3187 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:45.074330 kubelet[3187]: I0715 05:18:45.074140 3187 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:45.081963 kubelet[3187]: W0715 05:18:45.081950 3187 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 15 05:18:45.082120 kubelet[3187]: E0715 05:18:45.082018 3187 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4396.0.0-n-11ebebb5c9\" already exists" pod="kube-system/kube-scheduler-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:45.082959 kubelet[3187]: W0715 05:18:45.082894 3187 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots] Jul 15 05:18:45.082959 kubelet[3187]: E0715 05:18:45.082930 3187 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4396.0.0-n-11ebebb5c9\" already exists" pod="kube-system/kube-apiserver-ci-4396.0.0-n-11ebebb5c9" Jul 15 05:18:45.104563 kubelet[3187]: I0715 05:18:45.104515 3187 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4396.0.0-n-11ebebb5c9" podStartSLOduration=3.104482162 podStartE2EDuration="3.104482162s" podCreationTimestamp="2025-07-15 05:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:18:45.091165588 +0000 UTC m=+1.125513843" watchObservedRunningTime="2025-07-15 05:18:45.104482162 +0000 UTC m=+1.138830549" Jul 15 05:18:45.104889 kubelet[3187]: I0715 05:18:45.104699 3187 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-11ebebb5c9" podStartSLOduration=1.104690435 podStartE2EDuration="1.104690435s" podCreationTimestamp="2025-07-15 05:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:18:45.104384625 +0000 UTC m=+1.138732883" watchObservedRunningTime="2025-07-15 05:18:45.104690435 +0000 UTC m=+1.139038685" Jul 15 05:18:45.130083 kubelet[3187]: I0715 05:18:45.130038 3187 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4396.0.0-n-11ebebb5c9" podStartSLOduration=1.130027188 podStartE2EDuration="1.130027188s" podCreationTimestamp="2025-07-15 05:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:18:45.114567779 +0000 UTC m=+1.148916038" watchObservedRunningTime="2025-07-15 05:18:45.130027188 +0000 UTC m=+1.164375440" Jul 15 05:18:48.086199 kubelet[3187]: I0715 05:18:48.086170 3187 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 05:18:48.087164 containerd[1750]: time="2025-07-15T05:18:48.087056939Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 05:18:48.087789 kubelet[3187]: I0715 05:18:48.087763 3187 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 05:18:48.971766 systemd[1]: Created slice kubepods-besteffort-pod598d9cd9_e52f_4595_8b5f_9eb3d1189b52.slice - libcontainer container kubepods-besteffort-pod598d9cd9_e52f_4595_8b5f_9eb3d1189b52.slice. 
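Annotation: at 05:18:48 the kubelet pushes the node's allocated pod CIDR (192.168.0.0/24) to the container runtime, and containerd notes it will wait for a CNI config to be dropped in. A minimal sketch, assuming the official kubernetes Python client and kubeconfig access, of reading back the CIDR recorded on this node object (node name copied from the log above):

```python
# read_pod_cidr.py - illustrative only; assumes kubeconfig access to the cluster.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
node = v1.read_node("ci-4396.0.0-n-11ebebb5c9")
print("podCIDR :", node.spec.pod_cidr)    # expected here: 192.168.0.0/24
print("podCIDRs:", node.spec.pod_cidrs)   # dual-stack clusters may list several
```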
Jul 15 05:18:49.064530 kubelet[3187]: I0715 05:18:49.064487 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/598d9cd9-e52f-4595-8b5f-9eb3d1189b52-kube-proxy\") pod \"kube-proxy-2n86t\" (UID: \"598d9cd9-e52f-4595-8b5f-9eb3d1189b52\") " pod="kube-system/kube-proxy-2n86t" Jul 15 05:18:49.064530 kubelet[3187]: I0715 05:18:49.064526 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/598d9cd9-e52f-4595-8b5f-9eb3d1189b52-xtables-lock\") pod \"kube-proxy-2n86t\" (UID: \"598d9cd9-e52f-4595-8b5f-9eb3d1189b52\") " pod="kube-system/kube-proxy-2n86t" Jul 15 05:18:49.064670 kubelet[3187]: I0715 05:18:49.064557 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/598d9cd9-e52f-4595-8b5f-9eb3d1189b52-lib-modules\") pod \"kube-proxy-2n86t\" (UID: \"598d9cd9-e52f-4595-8b5f-9eb3d1189b52\") " pod="kube-system/kube-proxy-2n86t" Jul 15 05:18:49.064670 kubelet[3187]: I0715 05:18:49.064593 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnz6f\" (UniqueName: \"kubernetes.io/projected/598d9cd9-e52f-4595-8b5f-9eb3d1189b52-kube-api-access-dnz6f\") pod \"kube-proxy-2n86t\" (UID: \"598d9cd9-e52f-4595-8b5f-9eb3d1189b52\") " pod="kube-system/kube-proxy-2n86t" Jul 15 05:18:49.145809 systemd[1]: Created slice kubepods-besteffort-podd59f8c1d_c096_41dc_a70f_e189ba193e6b.slice - libcontainer container kubepods-besteffort-podd59f8c1d_c096_41dc_a70f_e189ba193e6b.slice. Jul 15 05:18:49.165672 kubelet[3187]: I0715 05:18:49.165362 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dktt9\" (UniqueName: \"kubernetes.io/projected/d59f8c1d-c096-41dc-a70f-e189ba193e6b-kube-api-access-dktt9\") pod \"tigera-operator-747864d56d-mj2sh\" (UID: \"d59f8c1d-c096-41dc-a70f-e189ba193e6b\") " pod="tigera-operator/tigera-operator-747864d56d-mj2sh" Jul 15 05:18:49.165672 kubelet[3187]: I0715 05:18:49.165402 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d59f8c1d-c096-41dc-a70f-e189ba193e6b-var-lib-calico\") pod \"tigera-operator-747864d56d-mj2sh\" (UID: \"d59f8c1d-c096-41dc-a70f-e189ba193e6b\") " pod="tigera-operator/tigera-operator-747864d56d-mj2sh" Jul 15 05:18:49.281645 containerd[1750]: time="2025-07-15T05:18:49.281575721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2n86t,Uid:598d9cd9-e52f-4595-8b5f-9eb3d1189b52,Namespace:kube-system,Attempt:0,}" Jul 15 05:18:49.315165 containerd[1750]: time="2025-07-15T05:18:49.315110586Z" level=info msg="connecting to shim 263a1cb4d14aebd20786ccca8dcdde43860a1799c4180079b149fdbdc0736bc5" address="unix:///run/containerd/s/b4e7de0a4672c9bc337b85750df4f139c0759a5823a250f5eb56dceb4ae2ed03" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:18:49.335897 systemd[1]: Started cri-containerd-263a1cb4d14aebd20786ccca8dcdde43860a1799c4180079b149fdbdc0736bc5.scope - libcontainer container 263a1cb4d14aebd20786ccca8dcdde43860a1799c4180079b149fdbdc0736bc5. 
Jul 15 05:18:49.355655 containerd[1750]: time="2025-07-15T05:18:49.355618403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2n86t,Uid:598d9cd9-e52f-4595-8b5f-9eb3d1189b52,Namespace:kube-system,Attempt:0,} returns sandbox id \"263a1cb4d14aebd20786ccca8dcdde43860a1799c4180079b149fdbdc0736bc5\"" Jul 15 05:18:49.358219 containerd[1750]: time="2025-07-15T05:18:49.358191555Z" level=info msg="CreateContainer within sandbox \"263a1cb4d14aebd20786ccca8dcdde43860a1799c4180079b149fdbdc0736bc5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 05:18:49.377754 containerd[1750]: time="2025-07-15T05:18:49.377618162Z" level=info msg="Container 0ee331c6d0be629d0c1885412b63649889e837fa1718808b02280071708fa2a0: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:18:49.391110 containerd[1750]: time="2025-07-15T05:18:49.391085901Z" level=info msg="CreateContainer within sandbox \"263a1cb4d14aebd20786ccca8dcdde43860a1799c4180079b149fdbdc0736bc5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0ee331c6d0be629d0c1885412b63649889e837fa1718808b02280071708fa2a0\"" Jul 15 05:18:49.391485 containerd[1750]: time="2025-07-15T05:18:49.391465961Z" level=info msg="StartContainer for \"0ee331c6d0be629d0c1885412b63649889e837fa1718808b02280071708fa2a0\"" Jul 15 05:18:49.392840 containerd[1750]: time="2025-07-15T05:18:49.392798504Z" level=info msg="connecting to shim 0ee331c6d0be629d0c1885412b63649889e837fa1718808b02280071708fa2a0" address="unix:///run/containerd/s/b4e7de0a4672c9bc337b85750df4f139c0759a5823a250f5eb56dceb4ae2ed03" protocol=ttrpc version=3 Jul 15 05:18:49.408869 systemd[1]: Started cri-containerd-0ee331c6d0be629d0c1885412b63649889e837fa1718808b02280071708fa2a0.scope - libcontainer container 0ee331c6d0be629d0c1885412b63649889e837fa1718808b02280071708fa2a0. Jul 15 05:18:49.435068 containerd[1750]: time="2025-07-15T05:18:49.435043169Z" level=info msg="StartContainer for \"0ee331c6d0be629d0c1885412b63649889e837fa1718808b02280071708fa2a0\" returns successfully" Jul 15 05:18:49.451972 containerd[1750]: time="2025-07-15T05:18:49.451813139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-mj2sh,Uid:d59f8c1d-c096-41dc-a70f-e189ba193e6b,Namespace:tigera-operator,Attempt:0,}" Jul 15 05:18:49.483328 containerd[1750]: time="2025-07-15T05:18:49.483304158Z" level=info msg="connecting to shim 805c81fca01e64590e128d55ca487ac03b5aa212d7661a4369a2b69c3debb912" address="unix:///run/containerd/s/343424b3d2637fa81a23aa3c32883ed80642f65289ce45567dcc1569bc4caab1" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:18:49.504880 systemd[1]: Started cri-containerd-805c81fca01e64590e128d55ca487ac03b5aa212d7661a4369a2b69c3debb912.scope - libcontainer container 805c81fca01e64590e128d55ca487ac03b5aa212d7661a4369a2b69c3debb912. 
Jul 15 05:18:49.548406 containerd[1750]: time="2025-07-15T05:18:49.548187278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-mj2sh,Uid:d59f8c1d-c096-41dc-a70f-e189ba193e6b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"805c81fca01e64590e128d55ca487ac03b5aa212d7661a4369a2b69c3debb912\"" Jul 15 05:18:49.549552 containerd[1750]: time="2025-07-15T05:18:49.549518572Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 05:18:50.090318 kubelet[3187]: I0715 05:18:50.090211 3187 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2n86t" podStartSLOduration=2.090196357 podStartE2EDuration="2.090196357s" podCreationTimestamp="2025-07-15 05:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:18:50.090035702 +0000 UTC m=+6.124383958" watchObservedRunningTime="2025-07-15 05:18:50.090196357 +0000 UTC m=+6.124544605" Jul 15 05:18:51.372787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2533696671.mount: Deactivated successfully. Jul 15 05:18:51.783246 containerd[1750]: time="2025-07-15T05:18:51.783199147Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:51.785638 containerd[1750]: time="2025-07-15T05:18:51.785609135Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 15 05:18:51.788124 containerd[1750]: time="2025-07-15T05:18:51.788089651Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:51.791601 containerd[1750]: time="2025-07-15T05:18:51.791556921Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:18:51.791955 containerd[1750]: time="2025-07-15T05:18:51.791870977Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.242241262s" Jul 15 05:18:51.791955 containerd[1750]: time="2025-07-15T05:18:51.791895002Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 15 05:18:51.793778 containerd[1750]: time="2025-07-15T05:18:51.793489325Z" level=info msg="CreateContainer within sandbox \"805c81fca01e64590e128d55ca487ac03b5aa212d7661a4369a2b69c3debb912\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 05:18:51.812878 containerd[1750]: time="2025-07-15T05:18:51.812853535Z" level=info msg="Container 30d4fd9f20c879d7d4cf52c9d3909a5d96e671f8ee2a9e1864fad4b625ddd92e: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:18:51.831126 containerd[1750]: time="2025-07-15T05:18:51.831103679Z" level=info msg="CreateContainer within sandbox \"805c81fca01e64590e128d55ca487ac03b5aa212d7661a4369a2b69c3debb912\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"30d4fd9f20c879d7d4cf52c9d3909a5d96e671f8ee2a9e1864fad4b625ddd92e\"" Jul 15 05:18:51.831491 containerd[1750]: time="2025-07-15T05:18:51.831453968Z" level=info msg="StartContainer for \"30d4fd9f20c879d7d4cf52c9d3909a5d96e671f8ee2a9e1864fad4b625ddd92e\"" Jul 15 05:18:51.832444 containerd[1750]: time="2025-07-15T05:18:51.832186999Z" level=info msg="connecting to shim 30d4fd9f20c879d7d4cf52c9d3909a5d96e671f8ee2a9e1864fad4b625ddd92e" address="unix:///run/containerd/s/343424b3d2637fa81a23aa3c32883ed80642f65289ce45567dcc1569bc4caab1" protocol=ttrpc version=3 Jul 15 05:18:51.846869 systemd[1]: Started cri-containerd-30d4fd9f20c879d7d4cf52c9d3909a5d96e671f8ee2a9e1864fad4b625ddd92e.scope - libcontainer container 30d4fd9f20c879d7d4cf52c9d3909a5d96e671f8ee2a9e1864fad4b625ddd92e. Jul 15 05:18:51.871490 containerd[1750]: time="2025-07-15T05:18:51.871469917Z" level=info msg="StartContainer for \"30d4fd9f20c879d7d4cf52c9d3909a5d96e671f8ee2a9e1864fad4b625ddd92e\" returns successfully" Jul 15 05:18:52.349539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631749589.mount: Deactivated successfully. Jul 15 05:18:54.071293 kubelet[3187]: I0715 05:18:54.071241 3187 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-mj2sh" podStartSLOduration=2.827850467 podStartE2EDuration="5.071224139s" podCreationTimestamp="2025-07-15 05:18:49 +0000 UTC" firstStartedPulling="2025-07-15 05:18:49.549103428 +0000 UTC m=+5.583451678" lastFinishedPulling="2025-07-15 05:18:51.792477094 +0000 UTC m=+7.826825350" observedRunningTime="2025-07-15 05:18:52.092837643 +0000 UTC m=+8.127185920" watchObservedRunningTime="2025-07-15 05:18:54.071224139 +0000 UTC m=+10.105572429" Jul 15 05:18:57.190284 sudo[2211]: pam_unix(sudo:session): session closed for user root Jul 15 05:18:57.290800 sshd[2210]: Connection closed by 10.200.16.10 port 34312 Jul 15 05:18:57.291880 sshd-session[2207]: pam_unix(sshd:session): session closed for user core Jul 15 05:18:57.296313 systemd[1]: sshd@6-10.200.8.39:22-10.200.16.10:34312.service: Deactivated successfully. Jul 15 05:18:57.299519 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 05:18:57.299882 systemd[1]: session-9.scope: Consumed 2.901s CPU time, 222.3M memory peak. Jul 15 05:18:57.302049 systemd-logind[1724]: Session 9 logged out. Waiting for processes to exit. Jul 15 05:18:57.305186 systemd-logind[1724]: Removed session 9. Jul 15 05:19:00.011150 systemd[1]: Created slice kubepods-besteffort-pod1b859c68_d04e_4fd9_af50_cfefa0b1f600.slice - libcontainer container kubepods-besteffort-pod1b859c68_d04e_4fd9_af50_cfefa0b1f600.slice. 
Jul 15 05:19:00.031504 kubelet[3187]: I0715 05:19:00.031472 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-525ck\" (UniqueName: \"kubernetes.io/projected/1b859c68-d04e-4fd9-af50-cfefa0b1f600-kube-api-access-525ck\") pod \"calico-typha-6ff8c7db4d-ngqt4\" (UID: \"1b859c68-d04e-4fd9-af50-cfefa0b1f600\") " pod="calico-system/calico-typha-6ff8c7db4d-ngqt4" Jul 15 05:19:00.031800 kubelet[3187]: I0715 05:19:00.031525 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b859c68-d04e-4fd9-af50-cfefa0b1f600-tigera-ca-bundle\") pod \"calico-typha-6ff8c7db4d-ngqt4\" (UID: \"1b859c68-d04e-4fd9-af50-cfefa0b1f600\") " pod="calico-system/calico-typha-6ff8c7db4d-ngqt4" Jul 15 05:19:00.031800 kubelet[3187]: I0715 05:19:00.031542 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1b859c68-d04e-4fd9-af50-cfefa0b1f600-typha-certs\") pod \"calico-typha-6ff8c7db4d-ngqt4\" (UID: \"1b859c68-d04e-4fd9-af50-cfefa0b1f600\") " pod="calico-system/calico-typha-6ff8c7db4d-ngqt4" Jul 15 05:19:00.321478 containerd[1750]: time="2025-07-15T05:19:00.321374701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ff8c7db4d-ngqt4,Uid:1b859c68-d04e-4fd9-af50-cfefa0b1f600,Namespace:calico-system,Attempt:0,}" Jul 15 05:19:00.361226 containerd[1750]: time="2025-07-15T05:19:00.361183839Z" level=info msg="connecting to shim c1be055e67f86ebf2c2963d7dfd5048ce6d07e9b370b02b519431eb639047549" address="unix:///run/containerd/s/7267eb09726140a39148be43a269b3d1582c9a91621c5d46a744b9c43605be1c" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:19:00.392324 systemd[1]: Started cri-containerd-c1be055e67f86ebf2c2963d7dfd5048ce6d07e9b370b02b519431eb639047549.scope - libcontainer container c1be055e67f86ebf2c2963d7dfd5048ce6d07e9b370b02b519431eb639047549. Jul 15 05:19:00.401673 systemd[1]: Created slice kubepods-besteffort-pode733d96c_1cef_450f_91d3_b71927f9fd26.slice - libcontainer container kubepods-besteffort-pode733d96c_1cef_450f_91d3_b71927f9fd26.slice. 
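Annotation: with the calico-typha sandbox starting and the calico-node pod's volumes reconciled just below, the calico-system namespace is where the remaining Calico bring-up can be watched. A minimal sketch, again assuming the Python client and cluster access, of listing those pods and their phases:

```python
# list_calico_pods.py - illustrative; shows the pods created by the tigera operator.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod("calico-system").items:
    print(f"{pod.metadata.name:40s} {pod.status.phase}")
```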
Jul 15 05:19:00.433519 kubelet[3187]: I0715 05:19:00.433415 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e733d96c-1cef-450f-91d3-b71927f9fd26-lib-modules\") pod \"calico-node-hzbpx\" (UID: \"e733d96c-1cef-450f-91d3-b71927f9fd26\") " pod="calico-system/calico-node-hzbpx" Jul 15 05:19:00.433838 kubelet[3187]: I0715 05:19:00.433812 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e733d96c-1cef-450f-91d3-b71927f9fd26-tigera-ca-bundle\") pod \"calico-node-hzbpx\" (UID: \"e733d96c-1cef-450f-91d3-b71927f9fd26\") " pod="calico-system/calico-node-hzbpx" Jul 15 05:19:00.434006 kubelet[3187]: I0715 05:19:00.433996 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e733d96c-1cef-450f-91d3-b71927f9fd26-xtables-lock\") pod \"calico-node-hzbpx\" (UID: \"e733d96c-1cef-450f-91d3-b71927f9fd26\") " pod="calico-system/calico-node-hzbpx" Jul 15 05:19:00.434283 kubelet[3187]: I0715 05:19:00.434137 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e733d96c-1cef-450f-91d3-b71927f9fd26-var-lib-calico\") pod \"calico-node-hzbpx\" (UID: \"e733d96c-1cef-450f-91d3-b71927f9fd26\") " pod="calico-system/calico-node-hzbpx" Jul 15 05:19:00.434283 kubelet[3187]: I0715 05:19:00.434156 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e733d96c-1cef-450f-91d3-b71927f9fd26-var-run-calico\") pod \"calico-node-hzbpx\" (UID: \"e733d96c-1cef-450f-91d3-b71927f9fd26\") " pod="calico-system/calico-node-hzbpx" Jul 15 05:19:00.434283 kubelet[3187]: I0715 05:19:00.434170 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr75n\" (UniqueName: \"kubernetes.io/projected/e733d96c-1cef-450f-91d3-b71927f9fd26-kube-api-access-gr75n\") pod \"calico-node-hzbpx\" (UID: \"e733d96c-1cef-450f-91d3-b71927f9fd26\") " pod="calico-system/calico-node-hzbpx" Jul 15 05:19:00.434756 kubelet[3187]: I0715 05:19:00.434415 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e733d96c-1cef-450f-91d3-b71927f9fd26-flexvol-driver-host\") pod \"calico-node-hzbpx\" (UID: \"e733d96c-1cef-450f-91d3-b71927f9fd26\") " pod="calico-system/calico-node-hzbpx" Jul 15 05:19:00.434756 kubelet[3187]: I0715 05:19:00.434446 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e733d96c-1cef-450f-91d3-b71927f9fd26-policysync\") pod \"calico-node-hzbpx\" (UID: \"e733d96c-1cef-450f-91d3-b71927f9fd26\") " pod="calico-system/calico-node-hzbpx" Jul 15 05:19:00.434756 kubelet[3187]: I0715 05:19:00.434464 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e733d96c-1cef-450f-91d3-b71927f9fd26-cni-bin-dir\") pod \"calico-node-hzbpx\" (UID: \"e733d96c-1cef-450f-91d3-b71927f9fd26\") " pod="calico-system/calico-node-hzbpx" Jul 15 05:19:00.434947 kubelet[3187]: I0715 05:19:00.434480 3187 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e733d96c-1cef-450f-91d3-b71927f9fd26-cni-log-dir\") pod \"calico-node-hzbpx\" (UID: \"e733d96c-1cef-450f-91d3-b71927f9fd26\") " pod="calico-system/calico-node-hzbpx" Jul 15 05:19:00.434947 kubelet[3187]: I0715 05:19:00.434910 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e733d96c-1cef-450f-91d3-b71927f9fd26-cni-net-dir\") pod \"calico-node-hzbpx\" (UID: \"e733d96c-1cef-450f-91d3-b71927f9fd26\") " pod="calico-system/calico-node-hzbpx" Jul 15 05:19:00.434947 kubelet[3187]: I0715 05:19:00.434929 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e733d96c-1cef-450f-91d3-b71927f9fd26-node-certs\") pod \"calico-node-hzbpx\" (UID: \"e733d96c-1cef-450f-91d3-b71927f9fd26\") " pod="calico-system/calico-node-hzbpx" Jul 15 05:19:00.454522 containerd[1750]: time="2025-07-15T05:19:00.454470297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ff8c7db4d-ngqt4,Uid:1b859c68-d04e-4fd9-af50-cfefa0b1f600,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1be055e67f86ebf2c2963d7dfd5048ce6d07e9b370b02b519431eb639047549\"" Jul 15 05:19:00.456122 containerd[1750]: time="2025-07-15T05:19:00.456101208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 05:19:00.537495 kubelet[3187]: E0715 05:19:00.537354 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.537495 kubelet[3187]: W0715 05:19:00.537371 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.537697 kubelet[3187]: E0715 05:19:00.537402 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.544150 kubelet[3187]: E0715 05:19:00.544099 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.544431 kubelet[3187]: W0715 05:19:00.544301 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.544431 kubelet[3187]: E0715 05:19:00.544320 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.545204 kubelet[3187]: E0715 05:19:00.545069 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.545204 kubelet[3187]: W0715 05:19:00.545081 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.545204 kubelet[3187]: E0715 05:19:00.545094 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:00.698176 kubelet[3187]: E0715 05:19:00.698130 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ltwxk" podUID="41438ff4-6284-45a5-8adf-d0024b23fdfa" Jul 15 05:19:00.707844 containerd[1750]: time="2025-07-15T05:19:00.707808566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hzbpx,Uid:e733d96c-1cef-450f-91d3-b71927f9fd26,Namespace:calico-system,Attempt:0,}" Jul 15 05:19:00.732326 kubelet[3187]: E0715 05:19:00.732311 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.732326 kubelet[3187]: W0715 05:19:00.732326 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.732414 kubelet[3187]: E0715 05:19:00.732338 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.732495 kubelet[3187]: E0715 05:19:00.732486 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.732521 kubelet[3187]: W0715 05:19:00.732496 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.732521 kubelet[3187]: E0715 05:19:00.732504 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.732648 kubelet[3187]: E0715 05:19:00.732639 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.732674 kubelet[3187]: W0715 05:19:00.732649 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.732674 kubelet[3187]: E0715 05:19:00.732657 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.733057 kubelet[3187]: E0715 05:19:00.732826 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.733057 kubelet[3187]: W0715 05:19:00.732838 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.733057 kubelet[3187]: E0715 05:19:00.732845 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:00.733057 kubelet[3187]: E0715 05:19:00.732962 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.733057 kubelet[3187]: W0715 05:19:00.732967 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.733057 kubelet[3187]: E0715 05:19:00.732975 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.733212 kubelet[3187]: E0715 05:19:00.733064 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.733212 kubelet[3187]: W0715 05:19:00.733069 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.733212 kubelet[3187]: E0715 05:19:00.733074 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.733212 kubelet[3187]: E0715 05:19:00.733148 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.733212 kubelet[3187]: W0715 05:19:00.733152 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.733212 kubelet[3187]: E0715 05:19:00.733157 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.733325 kubelet[3187]: E0715 05:19:00.733248 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.733325 kubelet[3187]: W0715 05:19:00.733252 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.733325 kubelet[3187]: E0715 05:19:00.733258 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.733401 kubelet[3187]: E0715 05:19:00.733339 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.733401 kubelet[3187]: W0715 05:19:00.733343 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.733401 kubelet[3187]: E0715 05:19:00.733348 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:00.733555 kubelet[3187]: E0715 05:19:00.733467 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.733555 kubelet[3187]: W0715 05:19:00.733474 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.733555 kubelet[3187]: E0715 05:19:00.733480 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.733555 kubelet[3187]: E0715 05:19:00.733555 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.733792 kubelet[3187]: W0715 05:19:00.733559 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.733792 kubelet[3187]: E0715 05:19:00.733565 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.733792 kubelet[3187]: E0715 05:19:00.733782 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.733792 kubelet[3187]: W0715 05:19:00.733789 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.733876 kubelet[3187]: E0715 05:19:00.733797 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.733924 kubelet[3187]: E0715 05:19:00.733913 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.733947 kubelet[3187]: W0715 05:19:00.733932 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.733947 kubelet[3187]: E0715 05:19:00.733939 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.734120 kubelet[3187]: E0715 05:19:00.734112 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.734120 kubelet[3187]: W0715 05:19:00.734120 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.734165 kubelet[3187]: E0715 05:19:00.734126 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:00.734294 kubelet[3187]: E0715 05:19:00.734286 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.734316 kubelet[3187]: W0715 05:19:00.734294 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.734316 kubelet[3187]: E0715 05:19:00.734299 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.734385 kubelet[3187]: E0715 05:19:00.734378 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.734405 kubelet[3187]: W0715 05:19:00.734386 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.734405 kubelet[3187]: E0715 05:19:00.734391 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.735007 kubelet[3187]: E0715 05:19:00.734989 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.735090 kubelet[3187]: W0715 05:19:00.735011 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.735090 kubelet[3187]: E0715 05:19:00.735024 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.735574 kubelet[3187]: E0715 05:19:00.735546 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.735574 kubelet[3187]: W0715 05:19:00.735564 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.735668 kubelet[3187]: E0715 05:19:00.735575 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.735821 kubelet[3187]: E0715 05:19:00.735810 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.735846 kubelet[3187]: W0715 05:19:00.735821 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.735846 kubelet[3187]: E0715 05:19:00.735831 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:00.736137 kubelet[3187]: E0715 05:19:00.736129 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.736137 kubelet[3187]: W0715 05:19:00.736137 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.736194 kubelet[3187]: E0715 05:19:00.736146 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.736403 kubelet[3187]: E0715 05:19:00.736391 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.736403 kubelet[3187]: W0715 05:19:00.736403 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.736453 kubelet[3187]: E0715 05:19:00.736411 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.736453 kubelet[3187]: I0715 05:19:00.736439 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/41438ff4-6284-45a5-8adf-d0024b23fdfa-socket-dir\") pod \"csi-node-driver-ltwxk\" (UID: \"41438ff4-6284-45a5-8adf-d0024b23fdfa\") " pod="calico-system/csi-node-driver-ltwxk" Jul 15 05:19:00.736731 kubelet[3187]: E0715 05:19:00.736712 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.736731 kubelet[3187]: W0715 05:19:00.736726 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.736816 kubelet[3187]: E0715 05:19:00.736766 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.736816 kubelet[3187]: I0715 05:19:00.736782 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41438ff4-6284-45a5-8adf-d0024b23fdfa-kubelet-dir\") pod \"csi-node-driver-ltwxk\" (UID: \"41438ff4-6284-45a5-8adf-d0024b23fdfa\") " pod="calico-system/csi-node-driver-ltwxk" Jul 15 05:19:00.737165 kubelet[3187]: E0715 05:19:00.737151 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.737787 kubelet[3187]: W0715 05:19:00.737164 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.737787 kubelet[3187]: E0715 05:19:00.737259 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:00.737787 kubelet[3187]: I0715 05:19:00.737277 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4jz7\" (UniqueName: \"kubernetes.io/projected/41438ff4-6284-45a5-8adf-d0024b23fdfa-kube-api-access-c4jz7\") pod \"csi-node-driver-ltwxk\" (UID: \"41438ff4-6284-45a5-8adf-d0024b23fdfa\") " pod="calico-system/csi-node-driver-ltwxk" Jul 15 05:19:00.737787 kubelet[3187]: E0715 05:19:00.737470 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.737787 kubelet[3187]: W0715 05:19:00.737476 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.737787 kubelet[3187]: E0715 05:19:00.737489 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.737787 kubelet[3187]: I0715 05:19:00.737501 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/41438ff4-6284-45a5-8adf-d0024b23fdfa-varrun\") pod \"csi-node-driver-ltwxk\" (UID: \"41438ff4-6284-45a5-8adf-d0024b23fdfa\") " pod="calico-system/csi-node-driver-ltwxk" Jul 15 05:19:00.737787 kubelet[3187]: E0715 05:19:00.737693 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.737970 kubelet[3187]: W0715 05:19:00.737698 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.737970 kubelet[3187]: E0715 05:19:00.737879 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.737970 kubelet[3187]: I0715 05:19:00.737898 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/41438ff4-6284-45a5-8adf-d0024b23fdfa-registration-dir\") pod \"csi-node-driver-ltwxk\" (UID: \"41438ff4-6284-45a5-8adf-d0024b23fdfa\") " pod="calico-system/csi-node-driver-ltwxk" Jul 15 05:19:00.738036 kubelet[3187]: E0715 05:19:00.738028 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.738036 kubelet[3187]: W0715 05:19:00.738035 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.738324 kubelet[3187]: E0715 05:19:00.738307 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:00.738539 kubelet[3187]: E0715 05:19:00.738527 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.738574 kubelet[3187]: W0715 05:19:00.738540 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.738731 kubelet[3187]: E0715 05:19:00.738719 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.738845 kubelet[3187]: E0715 05:19:00.738838 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.738880 kubelet[3187]: W0715 05:19:00.738846 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.738926 kubelet[3187]: E0715 05:19:00.738918 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.739015 kubelet[3187]: E0715 05:19:00.739008 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.739037 kubelet[3187]: W0715 05:19:00.739015 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.739101 kubelet[3187]: E0715 05:19:00.739088 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.739163 kubelet[3187]: E0715 05:19:00.739156 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.739804 kubelet[3187]: W0715 05:19:00.739164 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.739804 kubelet[3187]: E0715 05:19:00.739309 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.739804 kubelet[3187]: E0715 05:19:00.739339 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.739804 kubelet[3187]: W0715 05:19:00.739344 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.739804 kubelet[3187]: E0715 05:19:00.739349 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:00.739804 kubelet[3187]: E0715 05:19:00.739548 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.739804 kubelet[3187]: W0715 05:19:00.739553 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.739804 kubelet[3187]: E0715 05:19:00.739560 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.740002 kubelet[3187]: E0715 05:19:00.739904 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.740002 kubelet[3187]: W0715 05:19:00.739911 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.740002 kubelet[3187]: E0715 05:19:00.739921 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.740113 kubelet[3187]: E0715 05:19:00.740100 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.740145 kubelet[3187]: W0715 05:19:00.740113 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.740145 kubelet[3187]: E0715 05:19:00.740121 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.740346 kubelet[3187]: E0715 05:19:00.740338 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.740371 kubelet[3187]: W0715 05:19:00.740345 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.740371 kubelet[3187]: E0715 05:19:00.740353 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.750891 containerd[1750]: time="2025-07-15T05:19:00.750862269Z" level=info msg="connecting to shim 019734fcfbc89764f2c0262706ed5aa8379fe8d8d63b4f24ece9c44fd5bd9570" address="unix:///run/containerd/s/9517a8dd6b9f0d03840d0b2736e8e4b3fe68807a2df2ec68368a4cb16ce34380" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:19:00.787974 systemd[1]: Started cri-containerd-019734fcfbc89764f2c0262706ed5aa8379fe8d8d63b4f24ece9c44fd5bd9570.scope - libcontainer container 019734fcfbc89764f2c0262706ed5aa8379fe8d8d63b4f24ece9c44fd5bd9570. 
Jul 15 05:19:00.839004 kubelet[3187]: E0715 05:19:00.838909 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.839004 kubelet[3187]: W0715 05:19:00.838932 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.839004 kubelet[3187]: E0715 05:19:00.838943 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.839949 kubelet[3187]: E0715 05:19:00.839926 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.839949 kubelet[3187]: W0715 05:19:00.839937 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.840152 kubelet[3187]: E0715 05:19:00.840041 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.840205 kubelet[3187]: E0715 05:19:00.840200 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.840236 kubelet[3187]: W0715 05:19:00.840231 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.840303 kubelet[3187]: E0715 05:19:00.840276 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.840429 kubelet[3187]: E0715 05:19:00.840415 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.840429 kubelet[3187]: W0715 05:19:00.840421 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.840526 kubelet[3187]: E0715 05:19:00.840486 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.840614 kubelet[3187]: E0715 05:19:00.840602 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.840614 kubelet[3187]: W0715 05:19:00.840607 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.840712 kubelet[3187]: E0715 05:19:00.840665 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:00.841443 kubelet[3187]: E0715 05:19:00.841432 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.841586 kubelet[3187]: W0715 05:19:00.841511 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.841586 kubelet[3187]: E0715 05:19:00.841534 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.841840 kubelet[3187]: E0715 05:19:00.841832 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.841968 kubelet[3187]: W0715 05:19:00.841885 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.841968 kubelet[3187]: E0715 05:19:00.841896 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.842720 kubelet[3187]: E0715 05:19:00.842707 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.842855 kubelet[3187]: W0715 05:19:00.842809 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.842904 kubelet[3187]: E0715 05:19:00.842897 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.843076 kubelet[3187]: E0715 05:19:00.843060 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.843076 kubelet[3187]: W0715 05:19:00.843067 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.843809 kubelet[3187]: E0715 05:19:00.843379 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.844371 kubelet[3187]: E0715 05:19:00.843795 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.844371 kubelet[3187]: W0715 05:19:00.844182 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.844371 kubelet[3187]: E0715 05:19:00.844196 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:00.844672 kubelet[3187]: E0715 05:19:00.844652 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.844865 kubelet[3187]: W0715 05:19:00.844787 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.844981 kubelet[3187]: E0715 05:19:00.844969 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.845772 kubelet[3187]: E0715 05:19:00.845761 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.845833 kubelet[3187]: W0715 05:19:00.845825 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.845973 kubelet[3187]: E0715 05:19:00.845932 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.846080 kubelet[3187]: E0715 05:19:00.846074 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.846118 kubelet[3187]: W0715 05:19:00.846112 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.846266 kubelet[3187]: E0715 05:19:00.846237 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.846307 kubelet[3187]: E0715 05:19:00.846260 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.846359 kubelet[3187]: W0715 05:19:00.846340 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.846633 kubelet[3187]: E0715 05:19:00.846615 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.846806 kubelet[3187]: E0715 05:19:00.846785 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.846806 kubelet[3187]: W0715 05:19:00.846795 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.847828 kubelet[3187]: E0715 05:19:00.847777 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:00.848082 kubelet[3187]: E0715 05:19:00.848061 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.848082 kubelet[3187]: W0715 05:19:00.848072 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.848233 kubelet[3187]: E0715 05:19:00.848211 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.848283 kubelet[3187]: E0715 05:19:00.848278 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.848338 kubelet[3187]: W0715 05:19:00.848310 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.848430 kubelet[3187]: E0715 05:19:00.848416 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.848478 kubelet[3187]: E0715 05:19:00.848473 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.848524 kubelet[3187]: W0715 05:19:00.848502 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.848579 kubelet[3187]: E0715 05:19:00.848551 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.848652 kubelet[3187]: E0715 05:19:00.848640 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.848652 kubelet[3187]: W0715 05:19:00.848646 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.848778 kubelet[3187]: E0715 05:19:00.848758 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.848863 kubelet[3187]: E0715 05:19:00.848851 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.848863 kubelet[3187]: W0715 05:19:00.848857 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.848951 kubelet[3187]: E0715 05:19:00.848909 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:00.849759 kubelet[3187]: E0715 05:19:00.849048 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.849759 kubelet[3187]: W0715 05:19:00.849062 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.849963 kubelet[3187]: E0715 05:19:00.849866 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.850116 kubelet[3187]: E0715 05:19:00.850096 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.850116 kubelet[3187]: W0715 05:19:00.850105 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.850709 kubelet[3187]: E0715 05:19:00.850253 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.850845 kubelet[3187]: E0715 05:19:00.850836 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.850922 kubelet[3187]: W0715 05:19:00.850893 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.851930 kubelet[3187]: E0715 05:19:00.851914 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.852110 kubelet[3187]: E0715 05:19:00.852102 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.852592 kubelet[3187]: W0715 05:19:00.852574 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.852786 kubelet[3187]: E0715 05:19:00.852663 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.853719 kubelet[3187]: E0715 05:19:00.853705 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.853816 kubelet[3187]: W0715 05:19:00.853806 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.853888 kubelet[3187]: E0715 05:19:00.853863 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:00.863944 kubelet[3187]: E0715 05:19:00.863843 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:00.863944 kubelet[3187]: W0715 05:19:00.863909 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:00.863944 kubelet[3187]: E0715 05:19:00.863921 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:00.900768 containerd[1750]: time="2025-07-15T05:19:00.900022935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hzbpx,Uid:e733d96c-1cef-450f-91d3-b71927f9fd26,Namespace:calico-system,Attempt:0,} returns sandbox id \"019734fcfbc89764f2c0262706ed5aa8379fe8d8d63b4f24ece9c44fd5bd9570\"" Jul 15 05:19:01.820861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4087317733.mount: Deactivated successfully. Jul 15 05:19:02.034969 kubelet[3187]: E0715 05:19:02.034872 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ltwxk" podUID="41438ff4-6284-45a5-8adf-d0024b23fdfa" Jul 15 05:19:02.675214 containerd[1750]: time="2025-07-15T05:19:02.675181801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:02.677320 containerd[1750]: time="2025-07-15T05:19:02.677300168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 15 05:19:02.681836 containerd[1750]: time="2025-07-15T05:19:02.681801129Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:02.684974 containerd[1750]: time="2025-07-15T05:19:02.684928480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:02.685312 containerd[1750]: time="2025-07-15T05:19:02.685228249Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.228944761s" Jul 15 05:19:02.685312 containerd[1750]: time="2025-07-15T05:19:02.685252237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 15 05:19:02.686064 containerd[1750]: time="2025-07-15T05:19:02.685894196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 05:19:02.697965 containerd[1750]: time="2025-07-15T05:19:02.697588774Z" level=info msg="CreateContainer within sandbox \"c1be055e67f86ebf2c2963d7dfd5048ce6d07e9b370b02b519431eb639047549\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 05:19:02.717959 containerd[1750]: time="2025-07-15T05:19:02.717256095Z" level=info msg="Container e393f2b0de0de49fcb3bd1599f9d622198fc1fe4d1c547939be5186b6d65b1e1: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:19:02.732572 containerd[1750]: time="2025-07-15T05:19:02.732550718Z" level=info msg="CreateContainer within sandbox \"c1be055e67f86ebf2c2963d7dfd5048ce6d07e9b370b02b519431eb639047549\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e393f2b0de0de49fcb3bd1599f9d622198fc1fe4d1c547939be5186b6d65b1e1\"" Jul 15 05:19:02.732890 containerd[1750]: time="2025-07-15T05:19:02.732874136Z" level=info msg="StartContainer for \"e393f2b0de0de49fcb3bd1599f9d622198fc1fe4d1c547939be5186b6d65b1e1\"" Jul 15 05:19:02.734128 containerd[1750]: time="2025-07-15T05:19:02.733997600Z" level=info msg="connecting to shim e393f2b0de0de49fcb3bd1599f9d622198fc1fe4d1c547939be5186b6d65b1e1" address="unix:///run/containerd/s/7267eb09726140a39148be43a269b3d1582c9a91621c5d46a744b9c43605be1c" protocol=ttrpc version=3 Jul 15 05:19:02.749856 systemd[1]: Started cri-containerd-e393f2b0de0de49fcb3bd1599f9d622198fc1fe4d1c547939be5186b6d65b1e1.scope - libcontainer container e393f2b0de0de49fcb3bd1599f9d622198fc1fe4d1c547939be5186b6d65b1e1. Jul 15 05:19:02.786163 containerd[1750]: time="2025-07-15T05:19:02.786101884Z" level=info msg="StartContainer for \"e393f2b0de0de49fcb3bd1599f9d622198fc1fe4d1c547939be5186b6d65b1e1\" returns successfully" Jul 15 05:19:03.149518 kubelet[3187]: E0715 05:19:03.149447 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.149518 kubelet[3187]: W0715 05:19:03.149470 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.149518 kubelet[3187]: E0715 05:19:03.149487 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.149812 kubelet[3187]: E0715 05:19:03.149595 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.149812 kubelet[3187]: W0715 05:19:03.149600 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.149812 kubelet[3187]: E0715 05:19:03.149606 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.149812 kubelet[3187]: E0715 05:19:03.149694 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.149812 kubelet[3187]: W0715 05:19:03.149699 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.149812 kubelet[3187]: E0715 05:19:03.149705 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:03.149937 kubelet[3187]: E0715 05:19:03.149842 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.149937 kubelet[3187]: W0715 05:19:03.149847 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.149937 kubelet[3187]: E0715 05:19:03.149854 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.149989 kubelet[3187]: E0715 05:19:03.149939 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.149989 kubelet[3187]: W0715 05:19:03.149944 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.149989 kubelet[3187]: E0715 05:19:03.149950 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.150523 kubelet[3187]: E0715 05:19:03.150509 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.150523 kubelet[3187]: W0715 05:19:03.150522 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.150609 kubelet[3187]: E0715 05:19:03.150535 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.150643 kubelet[3187]: E0715 05:19:03.150632 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.150680 kubelet[3187]: W0715 05:19:03.150644 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.150680 kubelet[3187]: E0715 05:19:03.150651 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.150875 kubelet[3187]: E0715 05:19:03.150865 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.150925 kubelet[3187]: W0715 05:19:03.150876 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.150925 kubelet[3187]: E0715 05:19:03.150886 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:03.151002 kubelet[3187]: E0715 05:19:03.150993 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.151023 kubelet[3187]: W0715 05:19:03.151003 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.151023 kubelet[3187]: E0715 05:19:03.151009 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.151271 kubelet[3187]: E0715 05:19:03.151259 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.151301 kubelet[3187]: W0715 05:19:03.151272 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.151301 kubelet[3187]: E0715 05:19:03.151284 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.151393 kubelet[3187]: E0715 05:19:03.151382 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.151393 kubelet[3187]: W0715 05:19:03.151390 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.151538 kubelet[3187]: E0715 05:19:03.151397 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.151538 kubelet[3187]: E0715 05:19:03.151478 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.151538 kubelet[3187]: W0715 05:19:03.151483 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.151538 kubelet[3187]: E0715 05:19:03.151488 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.151670 kubelet[3187]: E0715 05:19:03.151608 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.151670 kubelet[3187]: W0715 05:19:03.151613 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.151670 kubelet[3187]: E0715 05:19:03.151619 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:03.151882 kubelet[3187]: E0715 05:19:03.151694 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.151882 kubelet[3187]: W0715 05:19:03.151699 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.151882 kubelet[3187]: E0715 05:19:03.151704 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.151882 kubelet[3187]: E0715 05:19:03.151858 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.151882 kubelet[3187]: W0715 05:19:03.151865 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.151882 kubelet[3187]: E0715 05:19:03.151873 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.159995 kubelet[3187]: E0715 05:19:03.159975 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.160070 kubelet[3187]: W0715 05:19:03.159998 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.160070 kubelet[3187]: E0715 05:19:03.160013 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.160164 kubelet[3187]: E0715 05:19:03.160138 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.160164 kubelet[3187]: W0715 05:19:03.160143 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.160164 kubelet[3187]: E0715 05:19:03.160151 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.160464 kubelet[3187]: E0715 05:19:03.160452 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.160464 kubelet[3187]: W0715 05:19:03.160464 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.160534 kubelet[3187]: E0715 05:19:03.160481 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:03.160780 kubelet[3187]: E0715 05:19:03.160767 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.160921 kubelet[3187]: W0715 05:19:03.160781 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.160921 kubelet[3187]: E0715 05:19:03.160918 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.161141 kubelet[3187]: E0715 05:19:03.161121 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.161141 kubelet[3187]: W0715 05:19:03.161135 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.161547 kubelet[3187]: E0715 05:19:03.161528 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.161871 kubelet[3187]: E0715 05:19:03.161853 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.161871 kubelet[3187]: W0715 05:19:03.161867 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.162746 kubelet[3187]: E0715 05:19:03.162716 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.162746 kubelet[3187]: W0715 05:19:03.162745 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.163516 kubelet[3187]: E0715 05:19:03.163500 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.163597 kubelet[3187]: E0715 05:19:03.163524 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.163597 kubelet[3187]: E0715 05:19:03.163593 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.163643 kubelet[3187]: W0715 05:19:03.163600 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.163643 kubelet[3187]: E0715 05:19:03.163616 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:03.163759 kubelet[3187]: E0715 05:19:03.163749 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.163788 kubelet[3187]: W0715 05:19:03.163759 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.163788 kubelet[3187]: E0715 05:19:03.163772 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.165465 kubelet[3187]: E0715 05:19:03.165447 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.165465 kubelet[3187]: W0715 05:19:03.165465 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.165569 kubelet[3187]: E0715 05:19:03.165488 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.165665 kubelet[3187]: E0715 05:19:03.165656 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.165692 kubelet[3187]: W0715 05:19:03.165665 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.165692 kubelet[3187]: E0715 05:19:03.165674 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.165965 kubelet[3187]: E0715 05:19:03.165953 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.165965 kubelet[3187]: W0715 05:19:03.165965 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.166119 kubelet[3187]: E0715 05:19:03.165975 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.166343 kubelet[3187]: E0715 05:19:03.166321 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.166343 kubelet[3187]: W0715 05:19:03.166334 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.166503 kubelet[3187]: E0715 05:19:03.166344 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:03.166503 kubelet[3187]: E0715 05:19:03.166492 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.166503 kubelet[3187]: W0715 05:19:03.166498 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.166564 kubelet[3187]: E0715 05:19:03.166515 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.166667 kubelet[3187]: E0715 05:19:03.166614 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.166667 kubelet[3187]: W0715 05:19:03.166620 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.166667 kubelet[3187]: E0715 05:19:03.166631 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.167930 kubelet[3187]: E0715 05:19:03.167908 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.167930 kubelet[3187]: W0715 05:19:03.167923 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.168063 kubelet[3187]: E0715 05:19:03.168049 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.168803 kubelet[3187]: E0715 05:19:03.168780 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.168803 kubelet[3187]: W0715 05:19:03.168796 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.168890 kubelet[3187]: E0715 05:19:03.168885 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:03.169072 kubelet[3187]: E0715 05:19:03.169062 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:03.169113 kubelet[3187]: W0715 05:19:03.169072 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:03.169113 kubelet[3187]: E0715 05:19:03.169082 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:04.035269 kubelet[3187]: E0715 05:19:04.035245 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ltwxk" podUID="41438ff4-6284-45a5-8adf-d0024b23fdfa" Jul 15 05:19:04.106983 kubelet[3187]: I0715 05:19:04.106960 3187 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:19:04.156681 kubelet[3187]: E0715 05:19:04.156665 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.156681 kubelet[3187]: W0715 05:19:04.156679 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.156996 kubelet[3187]: E0715 05:19:04.156692 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.156996 kubelet[3187]: E0715 05:19:04.156802 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.156996 kubelet[3187]: W0715 05:19:04.156808 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.156996 kubelet[3187]: E0715 05:19:04.156817 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.156996 kubelet[3187]: E0715 05:19:04.156920 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.156996 kubelet[3187]: W0715 05:19:04.156925 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.156996 kubelet[3187]: E0715 05:19:04.156931 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.157154 kubelet[3187]: E0715 05:19:04.157021 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.157154 kubelet[3187]: W0715 05:19:04.157026 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.157154 kubelet[3187]: E0715 05:19:04.157032 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:04.157154 kubelet[3187]: E0715 05:19:04.157116 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.157154 kubelet[3187]: W0715 05:19:04.157120 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.157154 kubelet[3187]: E0715 05:19:04.157126 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.157287 kubelet[3187]: E0715 05:19:04.157197 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.157287 kubelet[3187]: W0715 05:19:04.157201 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.157287 kubelet[3187]: E0715 05:19:04.157207 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.157287 kubelet[3187]: E0715 05:19:04.157287 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.157376 kubelet[3187]: W0715 05:19:04.157292 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.157376 kubelet[3187]: E0715 05:19:04.157297 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.157376 kubelet[3187]: E0715 05:19:04.157367 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.157376 kubelet[3187]: W0715 05:19:04.157371 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.157464 kubelet[3187]: E0715 05:19:04.157376 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.157464 kubelet[3187]: E0715 05:19:04.157453 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.157464 kubelet[3187]: W0715 05:19:04.157456 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.157464 kubelet[3187]: E0715 05:19:04.157461 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:04.157551 kubelet[3187]: E0715 05:19:04.157529 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.157551 kubelet[3187]: W0715 05:19:04.157533 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.157551 kubelet[3187]: E0715 05:19:04.157539 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.157617 kubelet[3187]: E0715 05:19:04.157604 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.157640 kubelet[3187]: W0715 05:19:04.157616 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.157640 kubelet[3187]: E0715 05:19:04.157623 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.157708 kubelet[3187]: E0715 05:19:04.157694 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.157708 kubelet[3187]: W0715 05:19:04.157703 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.157767 kubelet[3187]: E0715 05:19:04.157708 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.157808 kubelet[3187]: E0715 05:19:04.157795 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.157808 kubelet[3187]: W0715 05:19:04.157802 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.157853 kubelet[3187]: E0715 05:19:04.157808 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.157894 kubelet[3187]: E0715 05:19:04.157886 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.157894 kubelet[3187]: W0715 05:19:04.157892 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.157934 kubelet[3187]: E0715 05:19:04.157897 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:04.158067 kubelet[3187]: E0715 05:19:04.158059 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.158139 kubelet[3187]: W0715 05:19:04.158098 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.158139 kubelet[3187]: E0715 05:19:04.158106 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.166452 kubelet[3187]: E0715 05:19:04.166440 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.166634 kubelet[3187]: W0715 05:19:04.166531 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.166634 kubelet[3187]: E0715 05:19:04.166546 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.166822 kubelet[3187]: E0715 05:19:04.166813 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.166882 kubelet[3187]: W0715 05:19:04.166864 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.166926 kubelet[3187]: E0715 05:19:04.166919 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.167191 kubelet[3187]: E0715 05:19:04.167090 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.167191 kubelet[3187]: W0715 05:19:04.167097 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.167191 kubelet[3187]: E0715 05:19:04.167106 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.167399 kubelet[3187]: E0715 05:19:04.167388 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.167434 kubelet[3187]: W0715 05:19:04.167401 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.167434 kubelet[3187]: E0715 05:19:04.167412 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:04.167527 kubelet[3187]: E0715 05:19:04.167519 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.167550 kubelet[3187]: W0715 05:19:04.167528 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.167550 kubelet[3187]: E0715 05:19:04.167535 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.167694 kubelet[3187]: E0715 05:19:04.167613 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.167694 kubelet[3187]: W0715 05:19:04.167618 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.167694 kubelet[3187]: E0715 05:19:04.167623 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.167785 kubelet[3187]: E0715 05:19:04.167725 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.167785 kubelet[3187]: W0715 05:19:04.167750 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.167785 kubelet[3187]: E0715 05:19:04.167758 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.168013 kubelet[3187]: E0715 05:19:04.168001 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.168013 kubelet[3187]: W0715 05:19:04.168011 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.168076 kubelet[3187]: E0715 05:19:04.168022 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.168144 kubelet[3187]: E0715 05:19:04.168133 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.168144 kubelet[3187]: W0715 05:19:04.168141 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.168208 kubelet[3187]: E0715 05:19:04.168152 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:04.168278 kubelet[3187]: E0715 05:19:04.168268 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.168278 kubelet[3187]: W0715 05:19:04.168276 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.168326 kubelet[3187]: E0715 05:19:04.168289 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.168382 kubelet[3187]: E0715 05:19:04.168373 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.168382 kubelet[3187]: W0715 05:19:04.168381 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.168427 kubelet[3187]: E0715 05:19:04.168389 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.168511 kubelet[3187]: E0715 05:19:04.168502 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.168511 kubelet[3187]: W0715 05:19:04.168509 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.168565 kubelet[3187]: E0715 05:19:04.168517 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.168856 kubelet[3187]: E0715 05:19:04.168819 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.168922 kubelet[3187]: W0715 05:19:04.168888 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.168922 kubelet[3187]: E0715 05:19:04.168905 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.169043 kubelet[3187]: E0715 05:19:04.169034 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.169043 kubelet[3187]: W0715 05:19:04.169042 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.169105 kubelet[3187]: E0715 05:19:04.169051 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:04.169156 kubelet[3187]: E0715 05:19:04.169146 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.169156 kubelet[3187]: W0715 05:19:04.169154 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.169201 kubelet[3187]: E0715 05:19:04.169168 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.169609 kubelet[3187]: E0715 05:19:04.169590 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.169609 kubelet[3187]: W0715 05:19:04.169609 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.169686 kubelet[3187]: E0715 05:19:04.169621 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.169797 kubelet[3187]: E0715 05:19:04.169766 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.169797 kubelet[3187]: W0715 05:19:04.169773 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.169797 kubelet[3187]: E0715 05:19:04.169780 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:19:04.170032 kubelet[3187]: E0715 05:19:04.170023 3187 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:19:04.170058 kubelet[3187]: W0715 05:19:04.170033 3187 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:19:04.170058 kubelet[3187]: E0715 05:19:04.170040 3187 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:19:04.411357 containerd[1750]: time="2025-07-15T05:19:04.411323664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:04.414164 containerd[1750]: time="2025-07-15T05:19:04.414139452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 15 05:19:04.416984 containerd[1750]: time="2025-07-15T05:19:04.416951356Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:04.420044 containerd[1750]: time="2025-07-15T05:19:04.420005481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:04.420430 containerd[1750]: time="2025-07-15T05:19:04.420343644Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.734419987s" Jul 15 05:19:04.420430 containerd[1750]: time="2025-07-15T05:19:04.420369957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 15 05:19:04.422188 containerd[1750]: time="2025-07-15T05:19:04.422086197Z" level=info msg="CreateContainer within sandbox \"019734fcfbc89764f2c0262706ed5aa8379fe8d8d63b4f24ece9c44fd5bd9570\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 05:19:04.435138 containerd[1750]: time="2025-07-15T05:19:04.435113552Z" level=info msg="Container e77198260460be0907ac329505eb95bd232586a1dd045e5a70a0d60440b92237: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:19:04.452518 containerd[1750]: time="2025-07-15T05:19:04.452493647Z" level=info msg="CreateContainer within sandbox \"019734fcfbc89764f2c0262706ed5aa8379fe8d8d63b4f24ece9c44fd5bd9570\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e77198260460be0907ac329505eb95bd232586a1dd045e5a70a0d60440b92237\"" Jul 15 05:19:04.452869 containerd[1750]: time="2025-07-15T05:19:04.452814351Z" level=info msg="StartContainer for \"e77198260460be0907ac329505eb95bd232586a1dd045e5a70a0d60440b92237\"" Jul 15 05:19:04.454099 containerd[1750]: time="2025-07-15T05:19:04.454074494Z" level=info msg="connecting to shim e77198260460be0907ac329505eb95bd232586a1dd045e5a70a0d60440b92237" address="unix:///run/containerd/s/9517a8dd6b9f0d03840d0b2736e8e4b3fe68807a2df2ec68368a4cb16ce34380" protocol=ttrpc version=3 Jul 15 05:19:04.473897 systemd[1]: Started cri-containerd-e77198260460be0907ac329505eb95bd232586a1dd045e5a70a0d60440b92237.scope - libcontainer container e77198260460be0907ac329505eb95bd232586a1dd045e5a70a0d60440b92237. 
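The repeated driver-call failures above come from the kubelet probing the FlexVolume plugin directory nodeagent~uds: the driver binary `uds` is missing, so the `init` call produces no output, and unmarshaling that empty output is what yields "unexpected end of JSON input". A minimal Go sketch (illustrative only, not the kubelet's own driver-call code; the DriverStatus shape is an assumption) that reproduces both messages:

```go
// Illustrative sketch: reproduces the two errors logged above when the
// flexvolume driver binary is absent. The path mirrors the log entries.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus stands in for the JSON a flexvolume driver is expected to
// print in response to "init" (field names are assumptions for this example).
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	// With the binary missing the call fails and produces no output; the kubelet
	// surfaces this condition as "executable file not found in $PATH".
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		fmt.Println("driver call failed:", err)
	}

	// json.Unmarshal on empty output returns exactly the error in the log:
	// "unexpected end of JSON input".
	var status DriverStatus
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("failed to unmarshal output:", err)
	}
}
```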
Jul 15 05:19:04.501617 containerd[1750]: time="2025-07-15T05:19:04.501584272Z" level=info msg="StartContainer for \"e77198260460be0907ac329505eb95bd232586a1dd045e5a70a0d60440b92237\" returns successfully" Jul 15 05:19:04.504051 systemd[1]: cri-containerd-e77198260460be0907ac329505eb95bd232586a1dd045e5a70a0d60440b92237.scope: Deactivated successfully. Jul 15 05:19:04.507209 containerd[1750]: time="2025-07-15T05:19:04.507187054Z" level=info msg="received exit event container_id:\"e77198260460be0907ac329505eb95bd232586a1dd045e5a70a0d60440b92237\" id:\"e77198260460be0907ac329505eb95bd232586a1dd045e5a70a0d60440b92237\" pid:3882 exited_at:{seconds:1752556744 nanos:506924463}" Jul 15 05:19:04.507321 containerd[1750]: time="2025-07-15T05:19:04.507296859Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e77198260460be0907ac329505eb95bd232586a1dd045e5a70a0d60440b92237\" id:\"e77198260460be0907ac329505eb95bd232586a1dd045e5a70a0d60440b92237\" pid:3882 exited_at:{seconds:1752556744 nanos:506924463}" Jul 15 05:19:04.521968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e77198260460be0907ac329505eb95bd232586a1dd045e5a70a0d60440b92237-rootfs.mount: Deactivated successfully. Jul 15 05:19:05.126591 kubelet[3187]: I0715 05:19:05.125939 3187 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6ff8c7db4d-ngqt4" podStartSLOduration=3.895811046 podStartE2EDuration="6.125926438s" podCreationTimestamp="2025-07-15 05:18:59 +0000 UTC" firstStartedPulling="2025-07-15 05:19:00.455695311 +0000 UTC m=+16.490043566" lastFinishedPulling="2025-07-15 05:19:02.685810699 +0000 UTC m=+18.720158958" observedRunningTime="2025-07-15 05:19:03.156449994 +0000 UTC m=+19.190798250" watchObservedRunningTime="2025-07-15 05:19:05.125926438 +0000 UTC m=+21.160274685" Jul 15 05:19:06.036760 kubelet[3187]: E0715 05:19:06.036236 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ltwxk" podUID="41438ff4-6284-45a5-8adf-d0024b23fdfa" Jul 15 05:19:08.036809 kubelet[3187]: E0715 05:19:08.036770 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ltwxk" podUID="41438ff4-6284-45a5-8adf-d0024b23fdfa" Jul 15 05:19:08.115218 containerd[1750]: time="2025-07-15T05:19:08.115118732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 05:19:10.036549 kubelet[3187]: E0715 05:19:10.035753 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ltwxk" podUID="41438ff4-6284-45a5-8adf-d0024b23fdfa" Jul 15 05:19:11.665801 containerd[1750]: time="2025-07-15T05:19:11.665765011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:11.668774 containerd[1750]: time="2025-07-15T05:19:11.668753027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 15 05:19:11.671723 
containerd[1750]: time="2025-07-15T05:19:11.671685051Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:11.675162 containerd[1750]: time="2025-07-15T05:19:11.675115153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:11.675609 containerd[1750]: time="2025-07-15T05:19:11.675443085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.560242968s" Jul 15 05:19:11.675609 containerd[1750]: time="2025-07-15T05:19:11.675466201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 15 05:19:11.677248 containerd[1750]: time="2025-07-15T05:19:11.677225885Z" level=info msg="CreateContainer within sandbox \"019734fcfbc89764f2c0262706ed5aa8379fe8d8d63b4f24ece9c44fd5bd9570\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 05:19:11.699599 containerd[1750]: time="2025-07-15T05:19:11.696404575Z" level=info msg="Container b1e16a1583d6dd73383232f183df67c309536721f5d459acb6d449eed8848a43: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:19:11.715102 containerd[1750]: time="2025-07-15T05:19:11.715074340Z" level=info msg="CreateContainer within sandbox \"019734fcfbc89764f2c0262706ed5aa8379fe8d8d63b4f24ece9c44fd5bd9570\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b1e16a1583d6dd73383232f183df67c309536721f5d459acb6d449eed8848a43\"" Jul 15 05:19:11.715448 containerd[1750]: time="2025-07-15T05:19:11.715428592Z" level=info msg="StartContainer for \"b1e16a1583d6dd73383232f183df67c309536721f5d459acb6d449eed8848a43\"" Jul 15 05:19:11.716728 containerd[1750]: time="2025-07-15T05:19:11.716704633Z" level=info msg="connecting to shim b1e16a1583d6dd73383232f183df67c309536721f5d459acb6d449eed8848a43" address="unix:///run/containerd/s/9517a8dd6b9f0d03840d0b2736e8e4b3fe68807a2df2ec68368a4cb16ce34380" protocol=ttrpc version=3 Jul 15 05:19:11.733896 systemd[1]: Started cri-containerd-b1e16a1583d6dd73383232f183df67c309536721f5d459acb6d449eed8848a43.scope - libcontainer container b1e16a1583d6dd73383232f183df67c309536721f5d459acb6d449eed8848a43. 
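The "Observed pod startup duration" entry for calico-typha a few lines above decomposes cleanly: the numbers are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that E2E duration minus the image-pull window (firstStartedPulling to lastFinishedPulling). A quick check of the arithmetic with the timestamps copied from the log (a sketch, not kubelet code; the few-nanosecond difference from the logged 3.895811046 comes from monotonic clock readings):

```go
// Sketch verifying the startup-duration arithmetic in the log entry above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-15 05:18:59 +0000 UTC")
	firstPull := mustParse("2025-07-15 05:19:00.455695311 +0000 UTC")
	lastPull := mustParse("2025-07-15 05:19:02.685810699 +0000 UTC")
	running := mustParse("2025-07-15 05:19:05.125926438 +0000 UTC")

	e2e := running.Sub(created)          // 6.125926438s, matching podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~3.895811s, matching podStartSLOduration

	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```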
Jul 15 05:19:11.764521 containerd[1750]: time="2025-07-15T05:19:11.763201118Z" level=info msg="StartContainer for \"b1e16a1583d6dd73383232f183df67c309536721f5d459acb6d449eed8848a43\" returns successfully" Jul 15 05:19:12.035872 kubelet[3187]: E0715 05:19:12.035285 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ltwxk" podUID="41438ff4-6284-45a5-8adf-d0024b23fdfa" Jul 15 05:19:12.874551 containerd[1750]: time="2025-07-15T05:19:12.874518246Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:19:12.876657 systemd[1]: cri-containerd-b1e16a1583d6dd73383232f183df67c309536721f5d459acb6d449eed8848a43.scope: Deactivated successfully. Jul 15 05:19:12.877473 containerd[1750]: time="2025-07-15T05:19:12.876865405Z" level=info msg="received exit event container_id:\"b1e16a1583d6dd73383232f183df67c309536721f5d459acb6d449eed8848a43\" id:\"b1e16a1583d6dd73383232f183df67c309536721f5d459acb6d449eed8848a43\" pid:3938 exited_at:{seconds:1752556752 nanos:876453083}" Jul 15 05:19:12.877473 containerd[1750]: time="2025-07-15T05:19:12.877220054Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1e16a1583d6dd73383232f183df67c309536721f5d459acb6d449eed8848a43\" id:\"b1e16a1583d6dd73383232f183df67c309536721f5d459acb6d449eed8848a43\" pid:3938 exited_at:{seconds:1752556752 nanos:876453083}" Jul 15 05:19:12.877145 systemd[1]: cri-containerd-b1e16a1583d6dd73383232f183df67c309536721f5d459acb6d449eed8848a43.scope: Consumed 353ms CPU time, 190.9M memory peak, 171.2M written to disk. Jul 15 05:19:12.893479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1e16a1583d6dd73383232f183df67c309536721f5d459acb6d449eed8848a43-rootfs.mount: Deactivated successfully. Jul 15 05:19:12.936999 kubelet[3187]: I0715 05:19:12.936939 3187 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 05:19:12.974382 systemd[1]: Created slice kubepods-besteffort-pod6392a809_77cb_40b5_b244_24150cdc0d7c.slice - libcontainer container kubepods-besteffort-pod6392a809_77cb_40b5_b244_24150cdc0d7c.slice. Jul 15 05:19:12.982774 kubelet[3187]: W0715 05:19:12.982546 3187 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4396.0.0-n-11ebebb5c9" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4396.0.0-n-11ebebb5c9' and this object Jul 15 05:19:12.982853 kubelet[3187]: E0715 05:19:12.982795 3187 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4396.0.0-n-11ebebb5c9\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4396.0.0-n-11ebebb5c9' and this object" logger="UnhandledError" Jul 15 05:19:12.983290 systemd[1]: Created slice kubepods-burstable-pode0cc5731_9f6d_4735_b4c1_b38ed9dd4f80.slice - libcontainer container kubepods-burstable-pode0cc5731_9f6d_4735_b4c1_b38ed9dd4f80.slice. 
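The CNI reload failure above hinges on which files count as network configs: the fs event is for calico-kubeconfig, but a config loader only picks up files with config extensions in /etc/cni/net.d, and install-cni has not written a conflist there yet at this point. A rough sketch of that directory scan (the extension list is an assumption for illustration; containerd's actual loader lives in its go-cni library):

```go
// Rough sketch of the config-directory scan implied by the error above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func loadCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var configs []string
	for _, e := range entries {
		// Files like "calico-kubeconfig" have no config extension and are ignored,
		// so a write event for them can still leave the config set empty.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			configs = append(configs, filepath.Join(dir, e.Name()))
		}
	}
	if len(configs) == 0 {
		return nil, fmt.Errorf("no network config found in %s: cni plugin not initialized", dir)
	}
	return configs, nil
}

func main() {
	if _, err := loadCNIConfigs("/etc/cni/net.d"); err != nil {
		fmt.Println("cni config load failed:", err)
	}
}
```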
Jul 15 05:19:12.991369 systemd[1]: Created slice kubepods-burstable-pod22d9ae3c_d1ef_4c96_ac9d_55d1d4d8dd3a.slice - libcontainer container kubepods-burstable-pod22d9ae3c_d1ef_4c96_ac9d_55d1d4d8dd3a.slice. Jul 15 05:19:12.999016 systemd[1]: Created slice kubepods-besteffort-podbf63247a_25fe_4de6_99b2_376deebadade.slice - libcontainer container kubepods-besteffort-podbf63247a_25fe_4de6_99b2_376deebadade.slice. Jul 15 05:19:13.009416 systemd[1]: Created slice kubepods-besteffort-podba3b18bd_4c70_4270_863a_d5c2ea99eb0e.slice - libcontainer container kubepods-besteffort-podba3b18bd_4c70_4270_863a_d5c2ea99eb0e.slice. Jul 15 05:19:13.020860 systemd[1]: Created slice kubepods-besteffort-pod8a09d62e_b3b7_4bf4_9aaf_7bfb1de45f42.slice - libcontainer container kubepods-besteffort-pod8a09d62e_b3b7_4bf4_9aaf_7bfb1de45f42.slice. Jul 15 05:19:13.023928 kubelet[3187]: I0715 05:19:13.023886 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w86gp\" (UniqueName: \"kubernetes.io/projected/22d9ae3c-d1ef-4c96-ac9d-55d1d4d8dd3a-kube-api-access-w86gp\") pod \"coredns-668d6bf9bc-2g72x\" (UID: \"22d9ae3c-d1ef-4c96-ac9d-55d1d4d8dd3a\") " pod="kube-system/coredns-668d6bf9bc-2g72x" Jul 15 05:19:13.023928 kubelet[3187]: I0715 05:19:13.023922 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba3b18bd-4c70-4270-863a-d5c2ea99eb0e-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-5t4hj\" (UID: \"ba3b18bd-4c70-4270-863a-d5c2ea99eb0e\") " pod="calico-system/goldmane-768f4c5c69-5t4hj" Jul 15 05:19:13.024027 kubelet[3187]: I0715 05:19:13.023938 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ba3b18bd-4c70-4270-863a-d5c2ea99eb0e-goldmane-key-pair\") pod \"goldmane-768f4c5c69-5t4hj\" (UID: \"ba3b18bd-4c70-4270-863a-d5c2ea99eb0e\") " pod="calico-system/goldmane-768f4c5c69-5t4hj" Jul 15 05:19:13.024027 kubelet[3187]: I0715 05:19:13.023954 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0f99e969-c2e9-4e49-b60c-5065c4a1c565-calico-apiserver-certs\") pod \"calico-apiserver-7b745599d4-6rf49\" (UID: \"0f99e969-c2e9-4e49-b60c-5065c4a1c565\") " pod="calico-apiserver/calico-apiserver-7b745599d4-6rf49" Jul 15 05:19:13.024027 kubelet[3187]: I0715 05:19:13.023968 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7m6s\" (UniqueName: \"kubernetes.io/projected/ba3b18bd-4c70-4270-863a-d5c2ea99eb0e-kube-api-access-l7m6s\") pod \"goldmane-768f4c5c69-5t4hj\" (UID: \"ba3b18bd-4c70-4270-863a-d5c2ea99eb0e\") " pod="calico-system/goldmane-768f4c5c69-5t4hj" Jul 15 05:19:13.024027 kubelet[3187]: I0715 05:19:13.023983 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6392a809-77cb-40b5-b244-24150cdc0d7c-whisker-backend-key-pair\") pod \"whisker-745d87d456-tcrhv\" (UID: \"6392a809-77cb-40b5-b244-24150cdc0d7c\") " pod="calico-system/whisker-745d87d456-tcrhv" Jul 15 05:19:13.024027 kubelet[3187]: I0715 05:19:13.023998 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6392a809-77cb-40b5-b244-24150cdc0d7c-whisker-ca-bundle\") pod \"whisker-745d87d456-tcrhv\" (UID: \"6392a809-77cb-40b5-b244-24150cdc0d7c\") " pod="calico-system/whisker-745d87d456-tcrhv" Jul 15 05:19:13.024130 kubelet[3187]: I0715 05:19:13.024017 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xklgr\" (UniqueName: \"kubernetes.io/projected/0f99e969-c2e9-4e49-b60c-5065c4a1c565-kube-api-access-xklgr\") pod \"calico-apiserver-7b745599d4-6rf49\" (UID: \"0f99e969-c2e9-4e49-b60c-5065c4a1c565\") " pod="calico-apiserver/calico-apiserver-7b745599d4-6rf49" Jul 15 05:19:13.024130 kubelet[3187]: I0715 05:19:13.024032 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf63247a-25fe-4de6-99b2-376deebadade-tigera-ca-bundle\") pod \"calico-kube-controllers-6cbc978cdc-b78vb\" (UID: \"bf63247a-25fe-4de6-99b2-376deebadade\") " pod="calico-system/calico-kube-controllers-6cbc978cdc-b78vb" Jul 15 05:19:13.024130 kubelet[3187]: I0715 05:19:13.024052 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0cc5731-9f6d-4735-b4c1-b38ed9dd4f80-config-volume\") pod \"coredns-668d6bf9bc-rncjn\" (UID: \"e0cc5731-9f6d-4735-b4c1-b38ed9dd4f80\") " pod="kube-system/coredns-668d6bf9bc-rncjn" Jul 15 05:19:13.024130 kubelet[3187]: I0715 05:19:13.024068 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmgwc\" (UniqueName: \"kubernetes.io/projected/e0cc5731-9f6d-4735-b4c1-b38ed9dd4f80-kube-api-access-tmgwc\") pod \"coredns-668d6bf9bc-rncjn\" (UID: \"e0cc5731-9f6d-4735-b4c1-b38ed9dd4f80\") " pod="kube-system/coredns-668d6bf9bc-rncjn" Jul 15 05:19:13.024130 kubelet[3187]: I0715 05:19:13.024087 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8a09d62e-b3b7-4bf4-9aaf-7bfb1de45f42-calico-apiserver-certs\") pod \"calico-apiserver-7b745599d4-xljsz\" (UID: \"8a09d62e-b3b7-4bf4-9aaf-7bfb1de45f42\") " pod="calico-apiserver/calico-apiserver-7b745599d4-xljsz" Jul 15 05:19:13.024235 kubelet[3187]: I0715 05:19:13.024104 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh9hl\" (UniqueName: \"kubernetes.io/projected/8a09d62e-b3b7-4bf4-9aaf-7bfb1de45f42-kube-api-access-bh9hl\") pod \"calico-apiserver-7b745599d4-xljsz\" (UID: \"8a09d62e-b3b7-4bf4-9aaf-7bfb1de45f42\") " pod="calico-apiserver/calico-apiserver-7b745599d4-xljsz" Jul 15 05:19:13.024235 kubelet[3187]: I0715 05:19:13.024120 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ktwp\" (UniqueName: \"kubernetes.io/projected/6392a809-77cb-40b5-b244-24150cdc0d7c-kube-api-access-7ktwp\") pod \"whisker-745d87d456-tcrhv\" (UID: \"6392a809-77cb-40b5-b244-24150cdc0d7c\") " pod="calico-system/whisker-745d87d456-tcrhv" Jul 15 05:19:13.024235 kubelet[3187]: I0715 05:19:13.024136 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khbh6\" (UniqueName: \"kubernetes.io/projected/bf63247a-25fe-4de6-99b2-376deebadade-kube-api-access-khbh6\") pod \"calico-kube-controllers-6cbc978cdc-b78vb\" (UID: 
\"bf63247a-25fe-4de6-99b2-376deebadade\") " pod="calico-system/calico-kube-controllers-6cbc978cdc-b78vb" Jul 15 05:19:13.024235 kubelet[3187]: I0715 05:19:13.024151 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22d9ae3c-d1ef-4c96-ac9d-55d1d4d8dd3a-config-volume\") pod \"coredns-668d6bf9bc-2g72x\" (UID: \"22d9ae3c-d1ef-4c96-ac9d-55d1d4d8dd3a\") " pod="kube-system/coredns-668d6bf9bc-2g72x" Jul 15 05:19:13.024235 kubelet[3187]: I0715 05:19:13.024167 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba3b18bd-4c70-4270-863a-d5c2ea99eb0e-config\") pod \"goldmane-768f4c5c69-5t4hj\" (UID: \"ba3b18bd-4c70-4270-863a-d5c2ea99eb0e\") " pod="calico-system/goldmane-768f4c5c69-5t4hj" Jul 15 05:19:13.027196 systemd[1]: Created slice kubepods-besteffort-pod0f99e969_c2e9_4e49_b60c_5065c4a1c565.slice - libcontainer container kubepods-besteffort-pod0f99e969_c2e9_4e49_b60c_5065c4a1c565.slice. Jul 15 05:19:13.319631 containerd[1750]: time="2025-07-15T05:19:13.319602433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-5t4hj,Uid:ba3b18bd-4c70-4270-863a-d5c2ea99eb0e,Namespace:calico-system,Attempt:0,}" Jul 15 05:19:13.324208 containerd[1750]: time="2025-07-15T05:19:13.324183790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b745599d4-xljsz,Uid:8a09d62e-b3b7-4bf4-9aaf-7bfb1de45f42,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:19:13.329884 containerd[1750]: time="2025-07-15T05:19:13.329861518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b745599d4-6rf49,Uid:0f99e969-c2e9-4e49-b60c-5065c4a1c565,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:19:13.583960 containerd[1750]: time="2025-07-15T05:19:13.583885606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-745d87d456-tcrhv,Uid:6392a809-77cb-40b5-b244-24150cdc0d7c,Namespace:calico-system,Attempt:0,}" Jul 15 05:19:13.607038 containerd[1750]: time="2025-07-15T05:19:13.606998666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cbc978cdc-b78vb,Uid:bf63247a-25fe-4de6-99b2-376deebadade,Namespace:calico-system,Attempt:0,}" Jul 15 05:19:13.833892 containerd[1750]: time="2025-07-15T05:19:13.833813721Z" level=error msg="Failed to destroy network for sandbox \"6d4742997050f9e971c2b37540519151af2f8341342e2d425b8a6c11f8654586\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:13.837564 containerd[1750]: time="2025-07-15T05:19:13.836800871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-5t4hj,Uid:ba3b18bd-4c70-4270-863a-d5c2ea99eb0e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d4742997050f9e971c2b37540519151af2f8341342e2d425b8a6c11f8654586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:13.837664 kubelet[3187]: E0715 05:19:13.836975 3187 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6d4742997050f9e971c2b37540519151af2f8341342e2d425b8a6c11f8654586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:13.837664 kubelet[3187]: E0715 05:19:13.837040 3187 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d4742997050f9e971c2b37540519151af2f8341342e2d425b8a6c11f8654586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-5t4hj" Jul 15 05:19:13.837664 kubelet[3187]: E0715 05:19:13.837060 3187 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d4742997050f9e971c2b37540519151af2f8341342e2d425b8a6c11f8654586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-5t4hj" Jul 15 05:19:13.838425 kubelet[3187]: E0715 05:19:13.837092 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-5t4hj_calico-system(ba3b18bd-4c70-4270-863a-d5c2ea99eb0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-5t4hj_calico-system(ba3b18bd-4c70-4270-863a-d5c2ea99eb0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d4742997050f9e971c2b37540519151af2f8341342e2d425b8a6c11f8654586\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-5t4hj" podUID="ba3b18bd-4c70-4270-863a-d5c2ea99eb0e" Jul 15 05:19:13.850596 containerd[1750]: time="2025-07-15T05:19:13.850364127Z" level=error msg="Failed to destroy network for sandbox \"176eb6d420263adb84b9aec8d8cbb80ee819f538fd55bc5dbefe5d7aca87c904\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:13.854713 containerd[1750]: time="2025-07-15T05:19:13.854268177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b745599d4-xljsz,Uid:8a09d62e-b3b7-4bf4-9aaf-7bfb1de45f42,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"176eb6d420263adb84b9aec8d8cbb80ee819f538fd55bc5dbefe5d7aca87c904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:13.854897 kubelet[3187]: E0715 05:19:13.854407 3187 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"176eb6d420263adb84b9aec8d8cbb80ee819f538fd55bc5dbefe5d7aca87c904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:13.854897 kubelet[3187]: E0715 05:19:13.854447 3187 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"176eb6d420263adb84b9aec8d8cbb80ee819f538fd55bc5dbefe5d7aca87c904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b745599d4-xljsz" Jul 15 05:19:13.854897 kubelet[3187]: E0715 05:19:13.854471 3187 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"176eb6d420263adb84b9aec8d8cbb80ee819f538fd55bc5dbefe5d7aca87c904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b745599d4-xljsz" Jul 15 05:19:13.854982 kubelet[3187]: E0715 05:19:13.854502 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b745599d4-xljsz_calico-apiserver(8a09d62e-b3b7-4bf4-9aaf-7bfb1de45f42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b745599d4-xljsz_calico-apiserver(8a09d62e-b3b7-4bf4-9aaf-7bfb1de45f42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"176eb6d420263adb84b9aec8d8cbb80ee819f538fd55bc5dbefe5d7aca87c904\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b745599d4-xljsz" podUID="8a09d62e-b3b7-4bf4-9aaf-7bfb1de45f42" Jul 15 05:19:13.895219 containerd[1750]: time="2025-07-15T05:19:13.892592839Z" level=error msg="Failed to destroy network for sandbox \"67ace9d9c6cabe9b08de0478d3e7084f8d6fd460a94761e0fcc0cc48f6b418a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:13.895219 containerd[1750]: time="2025-07-15T05:19:13.895159758Z" level=error msg="Failed to destroy network for sandbox \"2789c932931746ca383aa9699680b82a7af323293671908638016bffde557b4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:13.895540 containerd[1750]: time="2025-07-15T05:19:13.895482641Z" level=error msg="Failed to destroy network for sandbox \"b3884f9df32912e2f92026b82bee8818720bf4102fcdf31923bc37d4b12f9e3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:13.898869 containerd[1750]: time="2025-07-15T05:19:13.898835522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cbc978cdc-b78vb,Uid:bf63247a-25fe-4de6-99b2-376deebadade,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3884f9df32912e2f92026b82bee8818720bf4102fcdf31923bc37d4b12f9e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
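Every RunPodSandbox failure in this stretch ends in the same stat error: per the message itself, the Calico CNI plugin expects the calico/node container to have written the node's name to /var/lib/calico/nodename, and that container is not running yet. A minimal sketch of the failing check (illustrative, not the plugin's actual source):

```go
// Illustrative check behind the repeated sandbox errors: until calico/node
// creates /var/lib/calico/nodename, every CNI add/delete on this node fails.
package main

import (
	"fmt"
	"os"
	"strings"
)

func calicoNodeName() (string, error) {
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		// err reads "stat /var/lib/calico/nodename: no such file or directory",
		// matching the messages in the log above.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	if _, err := calicoNodeName(); err != nil {
		fmt.Println("plugin type=\"calico\" failed (add):", err)
	}
}
```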
Jul 15 05:19:13.900832 kubelet[3187]: E0715 05:19:13.899073 3187 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3884f9df32912e2f92026b82bee8818720bf4102fcdf31923bc37d4b12f9e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:13.900832 kubelet[3187]: E0715 05:19:13.899111 3187 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3884f9df32912e2f92026b82bee8818720bf4102fcdf31923bc37d4b12f9e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cbc978cdc-b78vb" Jul 15 05:19:13.900832 kubelet[3187]: E0715 05:19:13.899130 3187 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3884f9df32912e2f92026b82bee8818720bf4102fcdf31923bc37d4b12f9e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cbc978cdc-b78vb" Jul 15 05:19:13.900954 kubelet[3187]: E0715 05:19:13.899161 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cbc978cdc-b78vb_calico-system(bf63247a-25fe-4de6-99b2-376deebadade)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cbc978cdc-b78vb_calico-system(bf63247a-25fe-4de6-99b2-376deebadade)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3884f9df32912e2f92026b82bee8818720bf4102fcdf31923bc37d4b12f9e3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cbc978cdc-b78vb" podUID="bf63247a-25fe-4de6-99b2-376deebadade" Jul 15 05:19:13.904050 containerd[1750]: time="2025-07-15T05:19:13.901274308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b745599d4-6rf49,Uid:0f99e969-c2e9-4e49-b60c-5065c4a1c565,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67ace9d9c6cabe9b08de0478d3e7084f8d6fd460a94761e0fcc0cc48f6b418a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:13.904050 containerd[1750]: time="2025-07-15T05:19:13.903459810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-745d87d456-tcrhv,Uid:6392a809-77cb-40b5-b244-24150cdc0d7c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2789c932931746ca383aa9699680b82a7af323293671908638016bffde557b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:13.904173 kubelet[3187]: E0715 05:19:13.901531 3187 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67ace9d9c6cabe9b08de0478d3e7084f8d6fd460a94761e0fcc0cc48f6b418a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:13.904173 kubelet[3187]: E0715 05:19:13.901567 3187 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67ace9d9c6cabe9b08de0478d3e7084f8d6fd460a94761e0fcc0cc48f6b418a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b745599d4-6rf49" Jul 15 05:19:13.904173 kubelet[3187]: E0715 05:19:13.901583 3187 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67ace9d9c6cabe9b08de0478d3e7084f8d6fd460a94761e0fcc0cc48f6b418a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b745599d4-6rf49" Jul 15 05:19:13.904173 kubelet[3187]: E0715 05:19:13.903744 3187 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2789c932931746ca383aa9699680b82a7af323293671908638016bffde557b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:13.904122 systemd[1]: run-netns-cni\x2d4e747ad4\x2d82e7\x2d59ca\x2d5cd5\x2d70d3b034916e.mount: Deactivated successfully. 
Jul 15 05:19:13.904433 kubelet[3187]: E0715 05:19:13.903778 3187 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2789c932931746ca383aa9699680b82a7af323293671908638016bffde557b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-745d87d456-tcrhv" Jul 15 05:19:13.904433 kubelet[3187]: E0715 05:19:13.903796 3187 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2789c932931746ca383aa9699680b82a7af323293671908638016bffde557b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-745d87d456-tcrhv" Jul 15 05:19:13.904433 kubelet[3187]: E0715 05:19:13.903825 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-745d87d456-tcrhv_calico-system(6392a809-77cb-40b5-b244-24150cdc0d7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-745d87d456-tcrhv_calico-system(6392a809-77cb-40b5-b244-24150cdc0d7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2789c932931746ca383aa9699680b82a7af323293671908638016bffde557b4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-745d87d456-tcrhv" podUID="6392a809-77cb-40b5-b244-24150cdc0d7c" Jul 15 05:19:13.904215 systemd[1]: run-netns-cni\x2d87a5b474\x2d0d20\x2d3dba\x2d2a9a\x2d709c96934cf9.mount: Deactivated successfully. Jul 15 05:19:13.904274 systemd[1]: run-netns-cni\x2da635ff48\x2d98b4\x2d0a4f\x2daf8f\x2d43c700a7e077.mount: Deactivated successfully. Jul 15 05:19:13.905350 kubelet[3187]: E0715 05:19:13.901611 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b745599d4-6rf49_calico-apiserver(0f99e969-c2e9-4e49-b60c-5065c4a1c565)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b745599d4-6rf49_calico-apiserver(0f99e969-c2e9-4e49-b60c-5065c4a1c565)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67ace9d9c6cabe9b08de0478d3e7084f8d6fd460a94761e0fcc0cc48f6b418a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b745599d4-6rf49" podUID="0f99e969-c2e9-4e49-b60c-5065c4a1c565" Jul 15 05:19:14.040782 systemd[1]: Created slice kubepods-besteffort-pod41438ff4_6284_45a5_8adf_d0024b23fdfa.slice - libcontainer container kubepods-besteffort-pod41438ff4_6284_45a5_8adf_d0024b23fdfa.slice. 
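The run-netns mount units above look odd only because of systemd unit-name escaping: "/" becomes "-" and a literal "-" becomes \x2d, so each unit maps back to a /run/netns/cni-<uuid> path left over from the failed sandboxes. A tiny decoder sketch (simplified; real systemd unescaping handles more than the \xNN case seen here):

```go
// Simplified decoder for the escaped mount-unit names above; only handles the
// \xNN escapes and the ".mount" suffix present in this log, not full systemd rules.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unescapeUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var out strings.Builder
	for i := 0; i < len(name); i++ {
		if name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x' {
			if b, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				out.WriteByte(byte(b)) // \x2d decodes to '-'
				i += 3
				continue
			}
		}
		if name[i] == '-' {
			out.WriteByte('/') // an unescaped '-' separates path components
			continue
		}
		out.WriteByte(name[i])
	}
	return "/" + out.String()
}

func main() {
	// Prints /run/netns/cni-4e747ad4-82e7-59ca-5cd5-70d3b034916e
	fmt.Println(unescapeUnit(`run-netns-cni\x2d4e747ad4\x2d82e7\x2d59ca\x2d5cd5\x2d70d3b034916e.mount`))
}
```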
Jul 15 05:19:14.042681 containerd[1750]: time="2025-07-15T05:19:14.042658704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ltwxk,Uid:41438ff4-6284-45a5-8adf-d0024b23fdfa,Namespace:calico-system,Attempt:0,}" Jul 15 05:19:14.082846 containerd[1750]: time="2025-07-15T05:19:14.082822317Z" level=error msg="Failed to destroy network for sandbox \"a6e12b3a72fb77f53a42245bacea08375db394d4baa522da51f898ae70b465fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:14.084686 systemd[1]: run-netns-cni\x2d2d0962d2\x2d319f\x2db5ae\x2d8f60\x2da05ee1f6e947.mount: Deactivated successfully. Jul 15 05:19:14.087202 containerd[1750]: time="2025-07-15T05:19:14.087169992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ltwxk,Uid:41438ff4-6284-45a5-8adf-d0024b23fdfa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e12b3a72fb77f53a42245bacea08375db394d4baa522da51f898ae70b465fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:14.087335 kubelet[3187]: E0715 05:19:14.087317 3187 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e12b3a72fb77f53a42245bacea08375db394d4baa522da51f898ae70b465fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:14.087377 kubelet[3187]: E0715 05:19:14.087359 3187 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e12b3a72fb77f53a42245bacea08375db394d4baa522da51f898ae70b465fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ltwxk" Jul 15 05:19:14.087400 kubelet[3187]: E0715 05:19:14.087375 3187 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e12b3a72fb77f53a42245bacea08375db394d4baa522da51f898ae70b465fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ltwxk" Jul 15 05:19:14.087429 kubelet[3187]: E0715 05:19:14.087405 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ltwxk_calico-system(41438ff4-6284-45a5-8adf-d0024b23fdfa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ltwxk_calico-system(41438ff4-6284-45a5-8adf-d0024b23fdfa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6e12b3a72fb77f53a42245bacea08375db394d4baa522da51f898ae70b465fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ltwxk" podUID="41438ff4-6284-45a5-8adf-d0024b23fdfa" 
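All of the sandbox failures above (whisker, calico-apiserver, csi-node-driver, and the coredns pods that follow) share one root cause: the Calico CNI plugin will not network a pod until calico/node has written the node name to /var/lib/calico/nodename, and that container is still being pulled at this point. Below is a minimal Go sketch of that gate, mirroring the error text in the log; the helper name and flow are illustrative, not Calico's actual source.

```go
// Minimal sketch (not Calico's source) of the gate implied by the repeated
// "stat /var/lib/calico/nodename" errors above: the CNI plugin stats the file
// that calico/node writes on startup and refuses to wire up a pod until it
// exists.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

func nodenameOrErr() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// Same failure mode the kubelet keeps logging while calico-node
		// is still being pulled and started.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	if name, err := nodenameOrErr(); err != nil {
		fmt.Println("CNI ADD would fail:", err)
	} else {
		fmt.Println("CNI ADD can proceed on node:", name)
	}
}
```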
Jul 15 05:19:14.127154 containerd[1750]: time="2025-07-15T05:19:14.127058123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 05:19:14.188513 containerd[1750]: time="2025-07-15T05:19:14.188479790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rncjn,Uid:e0cc5731-9f6d-4735-b4c1-b38ed9dd4f80,Namespace:kube-system,Attempt:0,}" Jul 15 05:19:14.196166 containerd[1750]: time="2025-07-15T05:19:14.196146751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2g72x,Uid:22d9ae3c-d1ef-4c96-ac9d-55d1d4d8dd3a,Namespace:kube-system,Attempt:0,}" Jul 15 05:19:14.237102 containerd[1750]: time="2025-07-15T05:19:14.237073692Z" level=error msg="Failed to destroy network for sandbox \"f8773bb33d586cb8469726ad6313c21bf9641496a72ed97a778053c999797aac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:14.239595 containerd[1750]: time="2025-07-15T05:19:14.239562545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rncjn,Uid:e0cc5731-9f6d-4735-b4c1-b38ed9dd4f80,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8773bb33d586cb8469726ad6313c21bf9641496a72ed97a778053c999797aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:14.240251 kubelet[3187]: E0715 05:19:14.239830 3187 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8773bb33d586cb8469726ad6313c21bf9641496a72ed97a778053c999797aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:14.240251 kubelet[3187]: E0715 05:19:14.239864 3187 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8773bb33d586cb8469726ad6313c21bf9641496a72ed97a778053c999797aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rncjn" Jul 15 05:19:14.240251 kubelet[3187]: E0715 05:19:14.239883 3187 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8773bb33d586cb8469726ad6313c21bf9641496a72ed97a778053c999797aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rncjn" Jul 15 05:19:14.240474 kubelet[3187]: E0715 05:19:14.239914 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rncjn_kube-system(e0cc5731-9f6d-4735-b4c1-b38ed9dd4f80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rncjn_kube-system(e0cc5731-9f6d-4735-b4c1-b38ed9dd4f80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8773bb33d586cb8469726ad6313c21bf9641496a72ed97a778053c999797aac\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rncjn" podUID="e0cc5731-9f6d-4735-b4c1-b38ed9dd4f80" Jul 15 05:19:14.241071 containerd[1750]: time="2025-07-15T05:19:14.241032356Z" level=error msg="Failed to destroy network for sandbox \"32dfcda7a76aaf9a88e31d91d508b3b226695580a92bda5135ad9488683753cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:14.243498 containerd[1750]: time="2025-07-15T05:19:14.243470941Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2g72x,Uid:22d9ae3c-d1ef-4c96-ac9d-55d1d4d8dd3a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32dfcda7a76aaf9a88e31d91d508b3b226695580a92bda5135ad9488683753cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:14.243628 kubelet[3187]: E0715 05:19:14.243597 3187 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32dfcda7a76aaf9a88e31d91d508b3b226695580a92bda5135ad9488683753cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:19:14.243664 kubelet[3187]: E0715 05:19:14.243627 3187 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32dfcda7a76aaf9a88e31d91d508b3b226695580a92bda5135ad9488683753cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2g72x" Jul 15 05:19:14.243664 kubelet[3187]: E0715 05:19:14.243643 3187 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32dfcda7a76aaf9a88e31d91d508b3b226695580a92bda5135ad9488683753cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2g72x" Jul 15 05:19:14.243727 kubelet[3187]: E0715 05:19:14.243671 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2g72x_kube-system(22d9ae3c-d1ef-4c96-ac9d-55d1d4d8dd3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2g72x_kube-system(22d9ae3c-d1ef-4c96-ac9d-55d1d4d8dd3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32dfcda7a76aaf9a88e31d91d508b3b226695580a92bda5135ad9488683753cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2g72x" podUID="22d9ae3c-d1ef-4c96-ac9d-55d1d4d8dd3a" Jul 15 05:19:14.893235 systemd[1]: 
run-netns-cni\x2d512be441\x2d4a2a\x2d9d43\x2dbd8e\x2d73eb0b98e85f.mount: Deactivated successfully. Jul 15 05:19:20.583389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598660342.mount: Deactivated successfully. Jul 15 05:19:20.614197 containerd[1750]: time="2025-07-15T05:19:20.614161260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:20.616647 containerd[1750]: time="2025-07-15T05:19:20.616614981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 15 05:19:20.619228 containerd[1750]: time="2025-07-15T05:19:20.619190390Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:20.622316 containerd[1750]: time="2025-07-15T05:19:20.622282159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:20.622678 containerd[1750]: time="2025-07-15T05:19:20.622559636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.495432573s" Jul 15 05:19:20.622678 containerd[1750]: time="2025-07-15T05:19:20.622582908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 15 05:19:20.633168 containerd[1750]: time="2025-07-15T05:19:20.633140493Z" level=info msg="CreateContainer within sandbox \"019734fcfbc89764f2c0262706ed5aa8379fe8d8d63b4f24ece9c44fd5bd9570\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 05:19:20.653319 containerd[1750]: time="2025-07-15T05:19:20.652848734Z" level=info msg="Container e70f1cb251ed89aff761ec089f50b6ad0979c186c502d3e9ec1000ac6e13d780: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:19:20.669326 containerd[1750]: time="2025-07-15T05:19:20.669301994Z" level=info msg="CreateContainer within sandbox \"019734fcfbc89764f2c0262706ed5aa8379fe8d8d63b4f24ece9c44fd5bd9570\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e70f1cb251ed89aff761ec089f50b6ad0979c186c502d3e9ec1000ac6e13d780\"" Jul 15 05:19:20.669688 containerd[1750]: time="2025-07-15T05:19:20.669635640Z" level=info msg="StartContainer for \"e70f1cb251ed89aff761ec089f50b6ad0979c186c502d3e9ec1000ac6e13d780\"" Jul 15 05:19:20.670988 containerd[1750]: time="2025-07-15T05:19:20.670949346Z" level=info msg="connecting to shim e70f1cb251ed89aff761ec089f50b6ad0979c186c502d3e9ec1000ac6e13d780" address="unix:///run/containerd/s/9517a8dd6b9f0d03840d0b2736e8e4b3fe68807a2df2ec68368a4cb16ce34380" protocol=ttrpc version=3 Jul 15 05:19:20.688881 systemd[1]: Started cri-containerd-e70f1cb251ed89aff761ec089f50b6ad0979c186c502d3e9ec1000ac6e13d780.scope - libcontainer container e70f1cb251ed89aff761ec089f50b6ad0979c186c502d3e9ec1000ac6e13d780. 
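The sequence above (PullImage returning after roughly 6.5 s, CreateContainer inside the existing calico-node sandbox, "connecting to shim ... protocol=ttrpc", and systemd starting the cri-containerd scope) is the kubelet driving containerd over the CRI. As a hedged sketch rather than kubelet code, the same pull, create, and start steps can be reproduced with the standalone containerd Go client against the k8s.io namespace seen in the log; the socket path and container ID below are assumptions.

```go
// Sketch only: the kubelet talks CRI to containerd, but the same
// pull -> create -> start sequence can be reproduced with the containerd
// Go client. Socket path, namespace and IDs are taken from or modeled on
// the log, not excerpted from kubelet code.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := client.NewContainer(ctx, "calico-node-demo",
		containerd.WithImage(img),
		containerd.WithNewSnapshot("calico-node-demo-snap", img),
		containerd.WithNewSpec(oci.WithImageConfig(img)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer ctr.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask dials the shim ("connecting to shim ..." in the log);
	// Start launches the container process itself.
	task, err := ctr.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```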
Jul 15 05:19:20.720879 containerd[1750]: time="2025-07-15T05:19:20.720859881Z" level=info msg="StartContainer for \"e70f1cb251ed89aff761ec089f50b6ad0979c186c502d3e9ec1000ac6e13d780\" returns successfully" Jul 15 05:19:21.094797 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 05:19:21.094969 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 15 05:19:21.264625 kubelet[3187]: I0715 05:19:21.264571 3187 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hzbpx" podStartSLOduration=1.544043729 podStartE2EDuration="21.264556443s" podCreationTimestamp="2025-07-15 05:19:00 +0000 UTC" firstStartedPulling="2025-07-15 05:19:00.902598405 +0000 UTC m=+16.936946656" lastFinishedPulling="2025-07-15 05:19:20.623111114 +0000 UTC m=+36.657459370" observedRunningTime="2025-07-15 05:19:21.183758764 +0000 UTC m=+37.218107016" watchObservedRunningTime="2025-07-15 05:19:21.264556443 +0000 UTC m=+37.298904776" Jul 15 05:19:21.336943 containerd[1750]: time="2025-07-15T05:19:21.336901191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e70f1cb251ed89aff761ec089f50b6ad0979c186c502d3e9ec1000ac6e13d780\" id:\"9ade63392b7a2fb0074ce29e040cc69a78c0428e07ab0ad9679de37150c8f9c9\" pid:4257 exit_status:1 exited_at:{seconds:1752556761 nanos:336646711}" Jul 15 05:19:21.368822 kubelet[3187]: I0715 05:19:21.368585 3187 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6392a809-77cb-40b5-b244-24150cdc0d7c-whisker-ca-bundle\") pod \"6392a809-77cb-40b5-b244-24150cdc0d7c\" (UID: \"6392a809-77cb-40b5-b244-24150cdc0d7c\") " Jul 15 05:19:21.368822 kubelet[3187]: I0715 05:19:21.368622 3187 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6392a809-77cb-40b5-b244-24150cdc0d7c-whisker-backend-key-pair\") pod \"6392a809-77cb-40b5-b244-24150cdc0d7c\" (UID: \"6392a809-77cb-40b5-b244-24150cdc0d7c\") " Jul 15 05:19:21.368822 kubelet[3187]: I0715 05:19:21.368640 3187 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ktwp\" (UniqueName: \"kubernetes.io/projected/6392a809-77cb-40b5-b244-24150cdc0d7c-kube-api-access-7ktwp\") pod \"6392a809-77cb-40b5-b244-24150cdc0d7c\" (UID: \"6392a809-77cb-40b5-b244-24150cdc0d7c\") " Jul 15 05:19:21.369148 kubelet[3187]: I0715 05:19:21.369129 3187 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6392a809-77cb-40b5-b244-24150cdc0d7c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6392a809-77cb-40b5-b244-24150cdc0d7c" (UID: "6392a809-77cb-40b5-b244-24150cdc0d7c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 05:19:21.372235 kubelet[3187]: I0715 05:19:21.372190 3187 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6392a809-77cb-40b5-b244-24150cdc0d7c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6392a809-77cb-40b5-b244-24150cdc0d7c" (UID: "6392a809-77cb-40b5-b244-24150cdc0d7c"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 05:19:21.372235 kubelet[3187]: I0715 05:19:21.372190 3187 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6392a809-77cb-40b5-b244-24150cdc0d7c-kube-api-access-7ktwp" (OuterVolumeSpecName: "kube-api-access-7ktwp") pod "6392a809-77cb-40b5-b244-24150cdc0d7c" (UID: "6392a809-77cb-40b5-b244-24150cdc0d7c"). InnerVolumeSpecName "kube-api-access-7ktwp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 05:19:21.469606 kubelet[3187]: I0715 05:19:21.469572 3187 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6392a809-77cb-40b5-b244-24150cdc0d7c-whisker-ca-bundle\") on node \"ci-4396.0.0-n-11ebebb5c9\" DevicePath \"\"" Jul 15 05:19:21.469606 kubelet[3187]: I0715 05:19:21.469605 3187 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7ktwp\" (UniqueName: \"kubernetes.io/projected/6392a809-77cb-40b5-b244-24150cdc0d7c-kube-api-access-7ktwp\") on node \"ci-4396.0.0-n-11ebebb5c9\" DevicePath \"\"" Jul 15 05:19:21.469698 kubelet[3187]: I0715 05:19:21.469612 3187 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6392a809-77cb-40b5-b244-24150cdc0d7c-whisker-backend-key-pair\") on node \"ci-4396.0.0-n-11ebebb5c9\" DevicePath \"\"" Jul 15 05:19:21.583340 systemd[1]: var-lib-kubelet-pods-6392a809\x2d77cb\x2d40b5\x2db244\x2d24150cdc0d7c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7ktwp.mount: Deactivated successfully. Jul 15 05:19:21.583428 systemd[1]: var-lib-kubelet-pods-6392a809\x2d77cb\x2d40b5\x2db244\x2d24150cdc0d7c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 15 05:19:22.040187 systemd[1]: Removed slice kubepods-besteffort-pod6392a809_77cb_40b5_b244_24150cdc0d7c.slice - libcontainer container kubepods-besteffort-pod6392a809_77cb_40b5_b244_24150cdc0d7c.slice. Jul 15 05:19:22.223672 systemd[1]: Created slice kubepods-besteffort-pod844fcf8d_96df_48ec_bcf5_c424ebc090fd.slice - libcontainer container kubepods-besteffort-pod844fcf8d_96df_48ec_bcf5_c424ebc090fd.slice. 
Jul 15 05:19:22.245284 containerd[1750]: time="2025-07-15T05:19:22.245257417Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e70f1cb251ed89aff761ec089f50b6ad0979c186c502d3e9ec1000ac6e13d780\" id:\"be7fffc13de8c875eedd0b57a2df2815e82ebb37999990cf73fe6cbbc5f18482\" pid:4302 exit_status:1 exited_at:{seconds:1752556762 nanos:245044177}" Jul 15 05:19:22.275146 kubelet[3187]: I0715 05:19:22.275119 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/844fcf8d-96df-48ec-bcf5-c424ebc090fd-whisker-ca-bundle\") pod \"whisker-8567dbff44-zwlhc\" (UID: \"844fcf8d-96df-48ec-bcf5-c424ebc090fd\") " pod="calico-system/whisker-8567dbff44-zwlhc" Jul 15 05:19:22.275389 kubelet[3187]: I0715 05:19:22.275155 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/844fcf8d-96df-48ec-bcf5-c424ebc090fd-whisker-backend-key-pair\") pod \"whisker-8567dbff44-zwlhc\" (UID: \"844fcf8d-96df-48ec-bcf5-c424ebc090fd\") " pod="calico-system/whisker-8567dbff44-zwlhc" Jul 15 05:19:22.275389 kubelet[3187]: I0715 05:19:22.275175 3187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gx5x\" (UniqueName: \"kubernetes.io/projected/844fcf8d-96df-48ec-bcf5-c424ebc090fd-kube-api-access-9gx5x\") pod \"whisker-8567dbff44-zwlhc\" (UID: \"844fcf8d-96df-48ec-bcf5-c424ebc090fd\") " pod="calico-system/whisker-8567dbff44-zwlhc" Jul 15 05:19:22.530117 containerd[1750]: time="2025-07-15T05:19:22.530086385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8567dbff44-zwlhc,Uid:844fcf8d-96df-48ec-bcf5-c424ebc090fd,Namespace:calico-system,Attempt:0,}" Jul 15 05:19:22.702848 systemd-networkd[1374]: calib0662343a7b: Link UP Jul 15 05:19:22.703067 systemd-networkd[1374]: calib0662343a7b: Gained carrier Jul 15 05:19:22.715021 containerd[1750]: 2025-07-15 05:19:22.579 [INFO][4404] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:19:22.715021 containerd[1750]: 2025-07-15 05:19:22.592 [INFO][4404] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--11ebebb5c9-k8s-whisker--8567dbff44--zwlhc-eth0 whisker-8567dbff44- calico-system 844fcf8d-96df-48ec-bcf5-c424ebc090fd 901 0 2025-07-15 05:19:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8567dbff44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4396.0.0-n-11ebebb5c9 whisker-8567dbff44-zwlhc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib0662343a7b [] [] }} ContainerID="4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" Namespace="calico-system" Pod="whisker-8567dbff44-zwlhc" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-whisker--8567dbff44--zwlhc-" Jul 15 05:19:22.715021 containerd[1750]: 2025-07-15 05:19:22.592 [INFO][4404] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" Namespace="calico-system" Pod="whisker-8567dbff44-zwlhc" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-whisker--8567dbff44--zwlhc-eth0" Jul 15 05:19:22.715021 containerd[1750]: 2025-07-15 05:19:22.633 [INFO][4418] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" HandleID="k8s-pod-network.4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-whisker--8567dbff44--zwlhc-eth0" Jul 15 05:19:22.715228 containerd[1750]: 2025-07-15 05:19:22.633 [INFO][4418] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" HandleID="k8s-pod-network.4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-whisker--8567dbff44--zwlhc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4396.0.0-n-11ebebb5c9", "pod":"whisker-8567dbff44-zwlhc", "timestamp":"2025-07-15 05:19:22.633690396 +0000 UTC"}, Hostname:"ci-4396.0.0-n-11ebebb5c9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:19:22.715228 containerd[1750]: 2025-07-15 05:19:22.634 [INFO][4418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:19:22.715228 containerd[1750]: 2025-07-15 05:19:22.634 [INFO][4418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:19:22.715228 containerd[1750]: 2025-07-15 05:19:22.634 [INFO][4418] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-11ebebb5c9' Jul 15 05:19:22.715228 containerd[1750]: 2025-07-15 05:19:22.644 [INFO][4418] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:22.715228 containerd[1750]: 2025-07-15 05:19:22.649 [INFO][4418] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:22.715228 containerd[1750]: 2025-07-15 05:19:22.653 [INFO][4418] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:22.715228 containerd[1750]: 2025-07-15 05:19:22.655 [INFO][4418] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:22.715228 containerd[1750]: 2025-07-15 05:19:22.657 [INFO][4418] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:22.715426 containerd[1750]: 2025-07-15 05:19:22.657 [INFO][4418] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:22.715426 containerd[1750]: 2025-07-15 05:19:22.657 [INFO][4418] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59 Jul 15 05:19:22.715426 containerd[1750]: 2025-07-15 05:19:22.661 [INFO][4418] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:22.715426 containerd[1750]: 2025-07-15 05:19:22.666 [INFO][4418] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.129/26] block=192.168.15.128/26 handle="k8s-pod-network.4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" 
host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:22.715426 containerd[1750]: 2025-07-15 05:19:22.666 [INFO][4418] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.129/26] handle="k8s-pod-network.4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:22.715426 containerd[1750]: 2025-07-15 05:19:22.666 [INFO][4418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:19:22.715426 containerd[1750]: 2025-07-15 05:19:22.666 [INFO][4418] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.129/26] IPv6=[] ContainerID="4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" HandleID="k8s-pod-network.4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-whisker--8567dbff44--zwlhc-eth0" Jul 15 05:19:22.715548 containerd[1750]: 2025-07-15 05:19:22.668 [INFO][4404] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" Namespace="calico-system" Pod="whisker-8567dbff44-zwlhc" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-whisker--8567dbff44--zwlhc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-whisker--8567dbff44--zwlhc-eth0", GenerateName:"whisker-8567dbff44-", Namespace:"calico-system", SelfLink:"", UID:"844fcf8d-96df-48ec-bcf5-c424ebc090fd", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 19, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8567dbff44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"", Pod:"whisker-8567dbff44-zwlhc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib0662343a7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:22.715548 containerd[1750]: 2025-07-15 05:19:22.668 [INFO][4404] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.129/32] ContainerID="4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" Namespace="calico-system" Pod="whisker-8567dbff44-zwlhc" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-whisker--8567dbff44--zwlhc-eth0" Jul 15 05:19:22.715631 containerd[1750]: 2025-07-15 05:19:22.668 [INFO][4404] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0662343a7b ContainerID="4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" Namespace="calico-system" Pod="whisker-8567dbff44-zwlhc" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-whisker--8567dbff44--zwlhc-eth0" Jul 15 05:19:22.715631 containerd[1750]: 2025-07-15 05:19:22.702 [INFO][4404] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" Namespace="calico-system" Pod="whisker-8567dbff44-zwlhc" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-whisker--8567dbff44--zwlhc-eth0" Jul 15 05:19:22.715680 containerd[1750]: 2025-07-15 05:19:22.702 [INFO][4404] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" Namespace="calico-system" Pod="whisker-8567dbff44-zwlhc" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-whisker--8567dbff44--zwlhc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-whisker--8567dbff44--zwlhc-eth0", GenerateName:"whisker-8567dbff44-", Namespace:"calico-system", SelfLink:"", UID:"844fcf8d-96df-48ec-bcf5-c424ebc090fd", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 19, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8567dbff44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59", Pod:"whisker-8567dbff44-zwlhc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib0662343a7b", MAC:"56:41:16:17:56:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:22.715763 containerd[1750]: 2025-07-15 05:19:22.713 [INFO][4404] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" Namespace="calico-system" Pod="whisker-8567dbff44-zwlhc" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-whisker--8567dbff44--zwlhc-eth0" Jul 15 05:19:22.753084 containerd[1750]: time="2025-07-15T05:19:22.753045532Z" level=info msg="connecting to shim 4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59" address="unix:///run/containerd/s/facf876df7221ff6411ab19b6dd4b022747c8f472c55cb45bacbc8cef9cd7439" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:19:22.772878 systemd[1]: Started cri-containerd-4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59.scope - libcontainer container 4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59. 
Jul 15 05:19:22.806798 containerd[1750]: time="2025-07-15T05:19:22.806712484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8567dbff44-zwlhc,Uid:844fcf8d-96df-48ec-bcf5-c424ebc090fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59\"" Jul 15 05:19:22.807917 containerd[1750]: time="2025-07-15T05:19:22.807900915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 05:19:24.032878 systemd-networkd[1374]: calib0662343a7b: Gained IPv6LL Jul 15 05:19:24.037108 kubelet[3187]: I0715 05:19:24.036987 3187 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6392a809-77cb-40b5-b244-24150cdc0d7c" path="/var/lib/kubelet/pods/6392a809-77cb-40b5-b244-24150cdc0d7c/volumes" Jul 15 05:19:24.188393 containerd[1750]: time="2025-07-15T05:19:24.188361331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:24.191510 containerd[1750]: time="2025-07-15T05:19:24.191477972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 15 05:19:24.194897 containerd[1750]: time="2025-07-15T05:19:24.194861416Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:24.197990 containerd[1750]: time="2025-07-15T05:19:24.197950072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:24.198424 containerd[1750]: time="2025-07-15T05:19:24.198340551Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.390327433s" Jul 15 05:19:24.198424 containerd[1750]: time="2025-07-15T05:19:24.198365657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 15 05:19:24.200009 containerd[1750]: time="2025-07-15T05:19:24.199809439Z" level=info msg="CreateContainer within sandbox \"4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 05:19:24.235817 containerd[1750]: time="2025-07-15T05:19:24.235795072Z" level=info msg="Container b6a3d0ec4b15f0d97ef4e04610a286daec9618fd506bab6cd030a3bca788c183: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:19:24.254017 containerd[1750]: time="2025-07-15T05:19:24.253994412Z" level=info msg="CreateContainer within sandbox \"4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b6a3d0ec4b15f0d97ef4e04610a286daec9618fd506bab6cd030a3bca788c183\"" Jul 15 05:19:24.254325 containerd[1750]: time="2025-07-15T05:19:24.254309074Z" level=info msg="StartContainer for \"b6a3d0ec4b15f0d97ef4e04610a286daec9618fd506bab6cd030a3bca788c183\"" Jul 15 05:19:24.255232 containerd[1750]: time="2025-07-15T05:19:24.255191478Z" level=info msg="connecting to shim 
b6a3d0ec4b15f0d97ef4e04610a286daec9618fd506bab6cd030a3bca788c183" address="unix:///run/containerd/s/facf876df7221ff6411ab19b6dd4b022747c8f472c55cb45bacbc8cef9cd7439" protocol=ttrpc version=3 Jul 15 05:19:24.272867 systemd[1]: Started cri-containerd-b6a3d0ec4b15f0d97ef4e04610a286daec9618fd506bab6cd030a3bca788c183.scope - libcontainer container b6a3d0ec4b15f0d97ef4e04610a286daec9618fd506bab6cd030a3bca788c183. Jul 15 05:19:24.312040 containerd[1750]: time="2025-07-15T05:19:24.311969061Z" level=info msg="StartContainer for \"b6a3d0ec4b15f0d97ef4e04610a286daec9618fd506bab6cd030a3bca788c183\" returns successfully" Jul 15 05:19:24.313881 containerd[1750]: time="2025-07-15T05:19:24.313494124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 05:19:25.036037 containerd[1750]: time="2025-07-15T05:19:25.035892721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b745599d4-xljsz,Uid:8a09d62e-b3b7-4bf4-9aaf-7bfb1de45f42,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:19:25.036037 containerd[1750]: time="2025-07-15T05:19:25.036032328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2g72x,Uid:22d9ae3c-d1ef-4c96-ac9d-55d1d4d8dd3a,Namespace:kube-system,Attempt:0,}" Jul 15 05:19:25.036318 containerd[1750]: time="2025-07-15T05:19:25.036298824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-5t4hj,Uid:ba3b18bd-4c70-4270-863a-d5c2ea99eb0e,Namespace:calico-system,Attempt:0,}" Jul 15 05:19:25.176458 systemd-networkd[1374]: calib8760b164dc: Link UP Jul 15 05:19:25.176611 systemd-networkd[1374]: calib8760b164dc: Gained carrier Jul 15 05:19:25.188148 containerd[1750]: 2025-07-15 05:19:25.087 [INFO][4560] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:19:25.188148 containerd[1750]: 2025-07-15 05:19:25.101 [INFO][4560] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--xljsz-eth0 calico-apiserver-7b745599d4- calico-apiserver 8a09d62e-b3b7-4bf4-9aaf-7bfb1de45f42 832 0 2025-07-15 05:18:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b745599d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4396.0.0-n-11ebebb5c9 calico-apiserver-7b745599d4-xljsz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib8760b164dc [] [] }} ContainerID="bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-xljsz" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--xljsz-" Jul 15 05:19:25.188148 containerd[1750]: 2025-07-15 05:19:25.101 [INFO][4560] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-xljsz" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--xljsz-eth0" Jul 15 05:19:25.188148 containerd[1750]: 2025-07-15 05:19:25.135 [INFO][4595] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" HandleID="k8s-pod-network.bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" 
Workload="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--xljsz-eth0" Jul 15 05:19:25.188425 containerd[1750]: 2025-07-15 05:19:25.135 [INFO][4595] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" HandleID="k8s-pod-network.bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--xljsz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4396.0.0-n-11ebebb5c9", "pod":"calico-apiserver-7b745599d4-xljsz", "timestamp":"2025-07-15 05:19:25.135149961 +0000 UTC"}, Hostname:"ci-4396.0.0-n-11ebebb5c9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:19:25.188425 containerd[1750]: 2025-07-15 05:19:25.135 [INFO][4595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:19:25.188425 containerd[1750]: 2025-07-15 05:19:25.135 [INFO][4595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:19:25.188425 containerd[1750]: 2025-07-15 05:19:25.135 [INFO][4595] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-11ebebb5c9' Jul 15 05:19:25.188425 containerd[1750]: 2025-07-15 05:19:25.144 [INFO][4595] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.188425 containerd[1750]: 2025-07-15 05:19:25.151 [INFO][4595] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.188425 containerd[1750]: 2025-07-15 05:19:25.155 [INFO][4595] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.188425 containerd[1750]: 2025-07-15 05:19:25.157 [INFO][4595] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.188425 containerd[1750]: 2025-07-15 05:19:25.158 [INFO][4595] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.188961 containerd[1750]: 2025-07-15 05:19:25.158 [INFO][4595] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.188961 containerd[1750]: 2025-07-15 05:19:25.159 [INFO][4595] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac Jul 15 05:19:25.188961 containerd[1750]: 2025-07-15 05:19:25.163 [INFO][4595] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.188961 containerd[1750]: 2025-07-15 05:19:25.170 [INFO][4595] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.130/26] block=192.168.15.128/26 handle="k8s-pod-network.bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.188961 containerd[1750]: 2025-07-15 05:19:25.170 [INFO][4595] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.15.130/26] handle="k8s-pod-network.bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.188961 containerd[1750]: 2025-07-15 05:19:25.170 [INFO][4595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:19:25.188961 containerd[1750]: 2025-07-15 05:19:25.170 [INFO][4595] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.130/26] IPv6=[] ContainerID="bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" HandleID="k8s-pod-network.bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--xljsz-eth0" Jul 15 05:19:25.189112 containerd[1750]: 2025-07-15 05:19:25.172 [INFO][4560] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-xljsz" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--xljsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--xljsz-eth0", GenerateName:"calico-apiserver-7b745599d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a09d62e-b3b7-4bf4-9aaf-7bfb1de45f42", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 18, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b745599d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"", Pod:"calico-apiserver-7b745599d4-xljsz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib8760b164dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:25.189179 containerd[1750]: 2025-07-15 05:19:25.172 [INFO][4560] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.130/32] ContainerID="bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-xljsz" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--xljsz-eth0" Jul 15 05:19:25.189179 containerd[1750]: 2025-07-15 05:19:25.172 [INFO][4560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib8760b164dc ContainerID="bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-xljsz" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--xljsz-eth0" Jul 15 05:19:25.189179 containerd[1750]: 2025-07-15 05:19:25.176 [INFO][4560] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-xljsz" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--xljsz-eth0" Jul 15 05:19:25.189245 containerd[1750]: 2025-07-15 05:19:25.177 [INFO][4560] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-xljsz" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--xljsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--xljsz-eth0", GenerateName:"calico-apiserver-7b745599d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a09d62e-b3b7-4bf4-9aaf-7bfb1de45f42", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 18, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b745599d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac", Pod:"calico-apiserver-7b745599d4-xljsz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib8760b164dc", MAC:"62:2d:e3:49:17:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:25.189294 containerd[1750]: 2025-07-15 05:19:25.186 [INFO][4560] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-xljsz" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--xljsz-eth0" Jul 15 05:19:25.222905 containerd[1750]: time="2025-07-15T05:19:25.222872516Z" level=info msg="connecting to shim bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac" address="unix:///run/containerd/s/db715c0654dd560b2c2cf9253f0f6b5d9e0ce9554d0139caf7fdbabd79a15ef2" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:19:25.239849 systemd[1]: Started cri-containerd-bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac.scope - libcontainer container bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac. 
Jul 15 05:19:25.283677 systemd-networkd[1374]: calif3e031fc675: Link UP Jul 15 05:19:25.283865 systemd-networkd[1374]: calif3e031fc675: Gained carrier Jul 15 05:19:25.288512 containerd[1750]: time="2025-07-15T05:19:25.288194664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b745599d4-xljsz,Uid:8a09d62e-b3b7-4bf4-9aaf-7bfb1de45f42,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac\"" Jul 15 05:19:25.299759 containerd[1750]: 2025-07-15 05:19:25.105 [INFO][4570] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:19:25.299759 containerd[1750]: 2025-07-15 05:19:25.117 [INFO][4570] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--2g72x-eth0 coredns-668d6bf9bc- kube-system 22d9ae3c-d1ef-4c96-ac9d-55d1d4d8dd3a 829 0 2025-07-15 05:18:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4396.0.0-n-11ebebb5c9 coredns-668d6bf9bc-2g72x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif3e031fc675 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2g72x" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--2g72x-" Jul 15 05:19:25.299759 containerd[1750]: 2025-07-15 05:19:25.117 [INFO][4570] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2g72x" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--2g72x-eth0" Jul 15 05:19:25.299759 containerd[1750]: 2025-07-15 05:19:25.156 [INFO][4603] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" HandleID="k8s-pod-network.4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--2g72x-eth0" Jul 15 05:19:25.299948 containerd[1750]: 2025-07-15 05:19:25.156 [INFO][4603] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" HandleID="k8s-pod-network.4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--2g72x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5860), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4396.0.0-n-11ebebb5c9", "pod":"coredns-668d6bf9bc-2g72x", "timestamp":"2025-07-15 05:19:25.156639595 +0000 UTC"}, Hostname:"ci-4396.0.0-n-11ebebb5c9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:19:25.299948 containerd[1750]: 2025-07-15 05:19:25.156 [INFO][4603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:19:25.299948 containerd[1750]: 2025-07-15 05:19:25.170 [INFO][4603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
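Alongside each successful CNI ADD, systemd-networkd reports the host-side cali* veth coming up ("Link UP", "Gained carrier", and later "Gained IPv6LL"); calif3e031fc675 above follows the same pattern as calib0662343a7b and calib8760b164dc. The snippet below is an illustrative sketch of inspecting and raising such a link with the vishvananda/netlink package, which CNI plugins in this ecosystem commonly use; the interface name is taken from the log and the code is not plugin source.

```go
// Illustrative only: look up the host-side veth the CNI plugin created and
// make sure it is administratively up, roughly the transition systemd-networkd
// reports as "Link UP" / "Gained carrier".
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	link, err := netlink.LinkByName("calif3e031fc675") // name from the log
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: type=%s mtu=%d state=%s\n",
		link.Attrs().Name, link.Type(), link.Attrs().MTU, link.Attrs().OperState)

	if err := netlink.LinkSetUp(link); err != nil { // no-op if already up
		log.Fatal(err)
	}
}
```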
Jul 15 05:19:25.299948 containerd[1750]: 2025-07-15 05:19:25.170 [INFO][4603] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-11ebebb5c9' Jul 15 05:19:25.299948 containerd[1750]: 2025-07-15 05:19:25.244 [INFO][4603] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.299948 containerd[1750]: 2025-07-15 05:19:25.251 [INFO][4603] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.299948 containerd[1750]: 2025-07-15 05:19:25.255 [INFO][4603] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.299948 containerd[1750]: 2025-07-15 05:19:25.257 [INFO][4603] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.299948 containerd[1750]: 2025-07-15 05:19:25.259 [INFO][4603] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.300171 containerd[1750]: 2025-07-15 05:19:25.261 [INFO][4603] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.300171 containerd[1750]: 2025-07-15 05:19:25.262 [INFO][4603] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e Jul 15 05:19:25.300171 containerd[1750]: 2025-07-15 05:19:25.269 [INFO][4603] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.300171 containerd[1750]: 2025-07-15 05:19:25.276 [INFO][4603] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.131/26] block=192.168.15.128/26 handle="k8s-pod-network.4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.300171 containerd[1750]: 2025-07-15 05:19:25.276 [INFO][4603] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.131/26] handle="k8s-pod-network.4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.300171 containerd[1750]: 2025-07-15 05:19:25.276 [INFO][4603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:19:25.300171 containerd[1750]: 2025-07-15 05:19:25.276 [INFO][4603] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.131/26] IPv6=[] ContainerID="4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" HandleID="k8s-pod-network.4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--2g72x-eth0" Jul 15 05:19:25.300306 containerd[1750]: 2025-07-15 05:19:25.280 [INFO][4570] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2g72x" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--2g72x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--2g72x-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"22d9ae3c-d1ef-4c96-ac9d-55d1d4d8dd3a", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 18, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"", Pod:"coredns-668d6bf9bc-2g72x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3e031fc675", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:25.300306 containerd[1750]: 2025-07-15 05:19:25.280 [INFO][4570] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.131/32] ContainerID="4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2g72x" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--2g72x-eth0" Jul 15 05:19:25.300306 containerd[1750]: 2025-07-15 05:19:25.280 [INFO][4570] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3e031fc675 ContainerID="4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2g72x" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--2g72x-eth0" Jul 15 05:19:25.300306 containerd[1750]: 2025-07-15 05:19:25.283 [INFO][4570] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-2g72x" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--2g72x-eth0" Jul 15 05:19:25.300306 containerd[1750]: 2025-07-15 05:19:25.284 [INFO][4570] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2g72x" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--2g72x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--2g72x-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"22d9ae3c-d1ef-4c96-ac9d-55d1d4d8dd3a", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 18, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e", Pod:"coredns-668d6bf9bc-2g72x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3e031fc675", MAC:"ee:4d:55:43:36:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:25.300306 containerd[1750]: 2025-07-15 05:19:25.297 [INFO][4570] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2g72x" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--2g72x-eth0" Jul 15 05:19:25.359471 containerd[1750]: time="2025-07-15T05:19:25.359418928Z" level=info msg="connecting to shim 4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e" address="unix:///run/containerd/s/2b1b884f7a6ea31b44fd035e12cf9ff0ca8ebdb4a57b749db9bd47d1240e6c39" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:19:25.382981 systemd[1]: Started cri-containerd-4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e.scope - libcontainer container 4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e. 
Jul 15 05:19:25.384220 systemd-networkd[1374]: cali3ee8a1590d6: Link UP Jul 15 05:19:25.384333 systemd-networkd[1374]: cali3ee8a1590d6: Gained carrier Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.110 [INFO][4582] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.124 [INFO][4582] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--11ebebb5c9-k8s-goldmane--768f4c5c69--5t4hj-eth0 goldmane-768f4c5c69- calico-system ba3b18bd-4c70-4270-863a-d5c2ea99eb0e 831 0 2025-07-15 05:18:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4396.0.0-n-11ebebb5c9 goldmane-768f4c5c69-5t4hj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3ee8a1590d6 [] [] }} ContainerID="a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" Namespace="calico-system" Pod="goldmane-768f4c5c69-5t4hj" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-goldmane--768f4c5c69--5t4hj-" Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.124 [INFO][4582] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" Namespace="calico-system" Pod="goldmane-768f4c5c69-5t4hj" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-goldmane--768f4c5c69--5t4hj-eth0" Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.158 [INFO][4608] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" HandleID="k8s-pod-network.a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-goldmane--768f4c5c69--5t4hj-eth0" Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.158 [INFO][4608] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" HandleID="k8s-pod-network.a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-goldmane--768f4c5c69--5t4hj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4396.0.0-n-11ebebb5c9", "pod":"goldmane-768f4c5c69-5t4hj", "timestamp":"2025-07-15 05:19:25.158370671 +0000 UTC"}, Hostname:"ci-4396.0.0-n-11ebebb5c9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.158 [INFO][4608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.276 [INFO][4608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.276 [INFO][4608] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-11ebebb5c9' Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.343 [INFO][4608] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.353 [INFO][4608] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.358 [INFO][4608] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.360 [INFO][4608] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.363 [INFO][4608] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.363 [INFO][4608] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.368 [INFO][4608] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7 Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.371 [INFO][4608] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.379 [INFO][4608] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.132/26] block=192.168.15.128/26 handle="k8s-pod-network.a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.379 [INFO][4608] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.132/26] handle="k8s-pod-network.a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.379 [INFO][4608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:19:25.406531 containerd[1750]: 2025-07-15 05:19:25.379 [INFO][4608] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.132/26] IPv6=[] ContainerID="a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" HandleID="k8s-pod-network.a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-goldmane--768f4c5c69--5t4hj-eth0" Jul 15 05:19:25.407067 containerd[1750]: 2025-07-15 05:19:25.381 [INFO][4582] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" Namespace="calico-system" Pod="goldmane-768f4c5c69-5t4hj" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-goldmane--768f4c5c69--5t4hj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-goldmane--768f4c5c69--5t4hj-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"ba3b18bd-4c70-4270-863a-d5c2ea99eb0e", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 18, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"", Pod:"goldmane-768f4c5c69-5t4hj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3ee8a1590d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:25.407067 containerd[1750]: 2025-07-15 05:19:25.381 [INFO][4582] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.132/32] ContainerID="a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" Namespace="calico-system" Pod="goldmane-768f4c5c69-5t4hj" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-goldmane--768f4c5c69--5t4hj-eth0" Jul 15 05:19:25.407067 containerd[1750]: 2025-07-15 05:19:25.381 [INFO][4582] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ee8a1590d6 ContainerID="a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" Namespace="calico-system" Pod="goldmane-768f4c5c69-5t4hj" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-goldmane--768f4c5c69--5t4hj-eth0" Jul 15 05:19:25.407067 containerd[1750]: 2025-07-15 05:19:25.384 [INFO][4582] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" Namespace="calico-system" Pod="goldmane-768f4c5c69-5t4hj" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-goldmane--768f4c5c69--5t4hj-eth0" Jul 15 05:19:25.407067 containerd[1750]: 2025-07-15 05:19:25.385 [INFO][4582] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" 
Namespace="calico-system" Pod="goldmane-768f4c5c69-5t4hj" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-goldmane--768f4c5c69--5t4hj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-goldmane--768f4c5c69--5t4hj-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"ba3b18bd-4c70-4270-863a-d5c2ea99eb0e", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 18, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7", Pod:"goldmane-768f4c5c69-5t4hj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3ee8a1590d6", MAC:"86:63:5b:42:e9:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:25.407067 containerd[1750]: 2025-07-15 05:19:25.404 [INFO][4582] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" Namespace="calico-system" Pod="goldmane-768f4c5c69-5t4hj" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-goldmane--768f4c5c69--5t4hj-eth0" Jul 15 05:19:25.440940 containerd[1750]: time="2025-07-15T05:19:25.440918652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2g72x,Uid:22d9ae3c-d1ef-4c96-ac9d-55d1d4d8dd3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e\"" Jul 15 05:19:25.442905 containerd[1750]: time="2025-07-15T05:19:25.442877459Z" level=info msg="CreateContainer within sandbox \"4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:19:25.480928 containerd[1750]: time="2025-07-15T05:19:25.480322876Z" level=info msg="connecting to shim a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7" address="unix:///run/containerd/s/c10a33f8e4fc05e441b4ad5adcb1bdfba42d80eb411a207a35ecf2bb823fab77" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:19:25.486589 containerd[1750]: time="2025-07-15T05:19:25.486564440Z" level=info msg="Container 33c3ebf0a2764d5846a8347392c1d9779328edf077a7dd42e37e713bc3f2008a: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:19:25.502880 systemd[1]: Started cri-containerd-a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7.scope - libcontainer container a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7. 
Jul 15 05:19:25.511319 containerd[1750]: time="2025-07-15T05:19:25.511282360Z" level=info msg="CreateContainer within sandbox \"4d15c7922a95c5827de47948da002f7ac0304d6079d90d6205922c792e70a32e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"33c3ebf0a2764d5846a8347392c1d9779328edf077a7dd42e37e713bc3f2008a\"" Jul 15 05:19:25.512270 containerd[1750]: time="2025-07-15T05:19:25.512162311Z" level=info msg="StartContainer for \"33c3ebf0a2764d5846a8347392c1d9779328edf077a7dd42e37e713bc3f2008a\"" Jul 15 05:19:25.513727 containerd[1750]: time="2025-07-15T05:19:25.513705983Z" level=info msg="connecting to shim 33c3ebf0a2764d5846a8347392c1d9779328edf077a7dd42e37e713bc3f2008a" address="unix:///run/containerd/s/2b1b884f7a6ea31b44fd035e12cf9ff0ca8ebdb4a57b749db9bd47d1240e6c39" protocol=ttrpc version=3 Jul 15 05:19:25.529847 systemd[1]: Started cri-containerd-33c3ebf0a2764d5846a8347392c1d9779328edf077a7dd42e37e713bc3f2008a.scope - libcontainer container 33c3ebf0a2764d5846a8347392c1d9779328edf077a7dd42e37e713bc3f2008a. Jul 15 05:19:25.547920 containerd[1750]: time="2025-07-15T05:19:25.547103118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-5t4hj,Uid:ba3b18bd-4c70-4270-863a-d5c2ea99eb0e,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7\"" Jul 15 05:19:25.560284 containerd[1750]: time="2025-07-15T05:19:25.560250289Z" level=info msg="StartContainer for \"33c3ebf0a2764d5846a8347392c1d9779328edf077a7dd42e37e713bc3f2008a\" returns successfully" Jul 15 05:19:26.036279 containerd[1750]: time="2025-07-15T05:19:26.035898480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rncjn,Uid:e0cc5731-9f6d-4735-b4c1-b38ed9dd4f80,Namespace:kube-system,Attempt:0,}" Jul 15 05:19:26.036279 containerd[1750]: time="2025-07-15T05:19:26.036159739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b745599d4-6rf49,Uid:0f99e969-c2e9-4e49-b60c-5065c4a1c565,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:19:26.152177 systemd-networkd[1374]: calic53553a8e4b: Link UP Jul 15 05:19:26.152291 systemd-networkd[1374]: calic53553a8e4b: Gained carrier Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.081 [INFO][4826] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.094 [INFO][4826] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--rncjn-eth0 coredns-668d6bf9bc- kube-system e0cc5731-9f6d-4735-b4c1-b38ed9dd4f80 834 0 2025-07-15 05:18:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4396.0.0-n-11ebebb5c9 coredns-668d6bf9bc-rncjn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic53553a8e4b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-rncjn" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--rncjn-" Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.094 [INFO][4826] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-rncjn" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--rncjn-eth0" Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.118 [INFO][4851] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" HandleID="k8s-pod-network.28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--rncjn-eth0" Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.119 [INFO][4851] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" HandleID="k8s-pod-network.28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--rncjn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000258ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4396.0.0-n-11ebebb5c9", "pod":"coredns-668d6bf9bc-rncjn", "timestamp":"2025-07-15 05:19:26.118954462 +0000 UTC"}, Hostname:"ci-4396.0.0-n-11ebebb5c9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.119 [INFO][4851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.119 [INFO][4851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.119 [INFO][4851] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-11ebebb5c9' Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.126 [INFO][4851] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.129 [INFO][4851] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.131 [INFO][4851] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.133 [INFO][4851] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.134 [INFO][4851] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.134 [INFO][4851] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.135 [INFO][4851] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4 Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.138 [INFO][4851] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 
05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.146 [INFO][4851] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.133/26] block=192.168.15.128/26 handle="k8s-pod-network.28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.146 [INFO][4851] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.133/26] handle="k8s-pod-network.28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.146 [INFO][4851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:19:26.167000 containerd[1750]: 2025-07-15 05:19:26.146 [INFO][4851] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.133/26] IPv6=[] ContainerID="28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" HandleID="k8s-pod-network.28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--rncjn-eth0" Jul 15 05:19:26.167492 containerd[1750]: 2025-07-15 05:19:26.148 [INFO][4826] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-rncjn" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--rncjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--rncjn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e0cc5731-9f6d-4735-b4c1-b38ed9dd4f80", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 18, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"", Pod:"coredns-668d6bf9bc-rncjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic53553a8e4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:26.167492 containerd[1750]: 2025-07-15 05:19:26.149 [INFO][4826] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.133/32] ContainerID="28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-rncjn" 
WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--rncjn-eth0" Jul 15 05:19:26.167492 containerd[1750]: 2025-07-15 05:19:26.149 [INFO][4826] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic53553a8e4b ContainerID="28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-rncjn" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--rncjn-eth0" Jul 15 05:19:26.167492 containerd[1750]: 2025-07-15 05:19:26.152 [INFO][4826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-rncjn" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--rncjn-eth0" Jul 15 05:19:26.167492 containerd[1750]: 2025-07-15 05:19:26.152 [INFO][4826] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-rncjn" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--rncjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--rncjn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e0cc5731-9f6d-4735-b4c1-b38ed9dd4f80", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 18, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4", Pod:"coredns-668d6bf9bc-rncjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic53553a8e4b", MAC:"8a:4d:f1:07:a1:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:26.167492 containerd[1750]: 2025-07-15 05:19:26.165 [INFO][4826] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-rncjn" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-coredns--668d6bf9bc--rncjn-eth0" Jul 15 05:19:26.176234 kubelet[3187]: I0715 05:19:26.176055 3187 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/coredns-668d6bf9bc-2g72x" podStartSLOduration=37.176038559 podStartE2EDuration="37.176038559s" podCreationTimestamp="2025-07-15 05:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:19:26.175681572 +0000 UTC m=+42.210029826" watchObservedRunningTime="2025-07-15 05:19:26.176038559 +0000 UTC m=+42.210386876" Jul 15 05:19:26.215107 containerd[1750]: time="2025-07-15T05:19:26.215061486Z" level=info msg="connecting to shim 28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4" address="unix:///run/containerd/s/795b0cf0e253058bdc9185162f0c8a08580c7f11f4644a12bd12cd2f23fd0504" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:19:26.246047 systemd[1]: Started cri-containerd-28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4.scope - libcontainer container 28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4. Jul 15 05:19:26.255175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010131440.mount: Deactivated successfully. Jul 15 05:19:26.307229 containerd[1750]: time="2025-07-15T05:19:26.306569881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rncjn,Uid:e0cc5731-9f6d-4735-b4c1-b38ed9dd4f80,Namespace:kube-system,Attempt:0,} returns sandbox id \"28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4\"" Jul 15 05:19:26.312750 containerd[1750]: time="2025-07-15T05:19:26.312557715Z" level=info msg="CreateContainer within sandbox \"28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:19:26.338443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3838909361.mount: Deactivated successfully. 
Jul 15 05:19:26.339723 containerd[1750]: time="2025-07-15T05:19:26.339692234Z" level=info msg="Container 762255004006fac8dde129d79ba8c9b2b3506914dddae95c0d3a1c7935031bf9: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:19:26.346380 systemd-networkd[1374]: calia14daf82fe6: Link UP Jul 15 05:19:26.346490 systemd-networkd[1374]: calia14daf82fe6: Gained carrier Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.085 [INFO][4837] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.095 [INFO][4837] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--6rf49-eth0 calico-apiserver-7b745599d4- calico-apiserver 0f99e969-c2e9-4e49-b60c-5065c4a1c565 833 0 2025-07-15 05:18:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b745599d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4396.0.0-n-11ebebb5c9 calico-apiserver-7b745599d4-6rf49 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia14daf82fe6 [] [] }} ContainerID="af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-6rf49" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--6rf49-" Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.095 [INFO][4837] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-6rf49" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--6rf49-eth0" Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.120 [INFO][4857] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" HandleID="k8s-pod-network.af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--6rf49-eth0" Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.120 [INFO][4857] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" HandleID="k8s-pod-network.af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--6rf49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4396.0.0-n-11ebebb5c9", "pod":"calico-apiserver-7b745599d4-6rf49", "timestamp":"2025-07-15 05:19:26.120681777 +0000 UTC"}, Hostname:"ci-4396.0.0-n-11ebebb5c9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.120 [INFO][4857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.146 [INFO][4857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.146 [INFO][4857] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-11ebebb5c9' Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.298 [INFO][4857] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.305 [INFO][4857] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.313 [INFO][4857] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.319 [INFO][4857] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.322 [INFO][4857] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.322 [INFO][4857] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.325 [INFO][4857] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0 Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.329 [INFO][4857] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.341 [INFO][4857] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.134/26] block=192.168.15.128/26 handle="k8s-pod-network.af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.341 [INFO][4857] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.134/26] handle="k8s-pod-network.af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.342 [INFO][4857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:19:26.359360 containerd[1750]: 2025-07-15 05:19:26.342 [INFO][4857] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.134/26] IPv6=[] ContainerID="af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" HandleID="k8s-pod-network.af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--6rf49-eth0" Jul 15 05:19:26.359856 containerd[1750]: 2025-07-15 05:19:26.344 [INFO][4837] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-6rf49" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--6rf49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--6rf49-eth0", GenerateName:"calico-apiserver-7b745599d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f99e969-c2e9-4e49-b60c-5065c4a1c565", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 18, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b745599d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"", Pod:"calico-apiserver-7b745599d4-6rf49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia14daf82fe6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:26.359856 containerd[1750]: 2025-07-15 05:19:26.344 [INFO][4837] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.134/32] ContainerID="af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-6rf49" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--6rf49-eth0" Jul 15 05:19:26.359856 containerd[1750]: 2025-07-15 05:19:26.344 [INFO][4837] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia14daf82fe6 ContainerID="af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-6rf49" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--6rf49-eth0" Jul 15 05:19:26.359856 containerd[1750]: 2025-07-15 05:19:26.346 [INFO][4837] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-6rf49" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--6rf49-eth0" Jul 15 05:19:26.359856 containerd[1750]: 2025-07-15 05:19:26.346 
[INFO][4837] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-6rf49" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--6rf49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--6rf49-eth0", GenerateName:"calico-apiserver-7b745599d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f99e969-c2e9-4e49-b60c-5065c4a1c565", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 18, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b745599d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0", Pod:"calico-apiserver-7b745599d4-6rf49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia14daf82fe6", MAC:"32:b3:a7:6f:69:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:26.359856 containerd[1750]: 2025-07-15 05:19:26.357 [INFO][4837] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" Namespace="calico-apiserver" Pod="calico-apiserver-7b745599d4-6rf49" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--apiserver--7b745599d4--6rf49-eth0" Jul 15 05:19:26.365051 containerd[1750]: time="2025-07-15T05:19:26.365003978Z" level=info msg="CreateContainer within sandbox \"28ebeccfa3e36e5f580490b9b32000328ea4250526914733341c7075220972f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"762255004006fac8dde129d79ba8c9b2b3506914dddae95c0d3a1c7935031bf9\"" Jul 15 05:19:26.366647 containerd[1750]: time="2025-07-15T05:19:26.366335945Z" level=info msg="StartContainer for \"762255004006fac8dde129d79ba8c9b2b3506914dddae95c0d3a1c7935031bf9\"" Jul 15 05:19:26.367252 containerd[1750]: time="2025-07-15T05:19:26.367224400Z" level=info msg="connecting to shim 762255004006fac8dde129d79ba8c9b2b3506914dddae95c0d3a1c7935031bf9" address="unix:///run/containerd/s/795b0cf0e253058bdc9185162f0c8a08580c7f11f4644a12bd12cd2f23fd0504" protocol=ttrpc version=3 Jul 15 05:19:26.387855 systemd[1]: Started cri-containerd-762255004006fac8dde129d79ba8c9b2b3506914dddae95c0d3a1c7935031bf9.scope - libcontainer container 762255004006fac8dde129d79ba8c9b2b3506914dddae95c0d3a1c7935031bf9. 
Jul 15 05:19:26.406773 containerd[1750]: time="2025-07-15T05:19:26.406712155Z" level=info msg="connecting to shim af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0" address="unix:///run/containerd/s/05140cec7cc71d7f99f4011df33d4338dd67e7871ddf254fd33cdce69aa312c4" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:19:26.437879 systemd[1]: Started cri-containerd-af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0.scope - libcontainer container af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0. Jul 15 05:19:26.448496 containerd[1750]: time="2025-07-15T05:19:26.448430050Z" level=info msg="StartContainer for \"762255004006fac8dde129d79ba8c9b2b3506914dddae95c0d3a1c7935031bf9\" returns successfully" Jul 15 05:19:26.518623 containerd[1750]: time="2025-07-15T05:19:26.518600847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b745599d4-6rf49,Uid:0f99e969-c2e9-4e49-b60c-5065c4a1c565,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0\"" Jul 15 05:19:26.733203 containerd[1750]: time="2025-07-15T05:19:26.733172106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:26.735775 containerd[1750]: time="2025-07-15T05:19:26.735753611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 15 05:19:26.738489 containerd[1750]: time="2025-07-15T05:19:26.738454206Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:26.741946 containerd[1750]: time="2025-07-15T05:19:26.741902642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:26.742321 containerd[1750]: time="2025-07-15T05:19:26.742236548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.428716846s" Jul 15 05:19:26.742321 containerd[1750]: time="2025-07-15T05:19:26.742260903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 15 05:19:26.743583 containerd[1750]: time="2025-07-15T05:19:26.743563364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 05:19:26.744400 containerd[1750]: time="2025-07-15T05:19:26.744361716Z" level=info msg="CreateContainer within sandbox \"4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 05:19:26.765313 containerd[1750]: time="2025-07-15T05:19:26.765281735Z" level=info msg="Container 9830b6553057154cb52167b0c7bbe6e52f8dfebfc31e7c4e09fb8bc50a00222b: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:19:26.784045 containerd[1750]: time="2025-07-15T05:19:26.784016580Z" level=info msg="CreateContainer within sandbox 
\"4a7709b606e1c9a32631d1482b28df86c76ff29b6f54092bb0fca5593d8afd59\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9830b6553057154cb52167b0c7bbe6e52f8dfebfc31e7c4e09fb8bc50a00222b\"" Jul 15 05:19:26.787747 containerd[1750]: time="2025-07-15T05:19:26.787586118Z" level=info msg="StartContainer for \"9830b6553057154cb52167b0c7bbe6e52f8dfebfc31e7c4e09fb8bc50a00222b\"" Jul 15 05:19:26.789513 containerd[1750]: time="2025-07-15T05:19:26.789485339Z" level=info msg="connecting to shim 9830b6553057154cb52167b0c7bbe6e52f8dfebfc31e7c4e09fb8bc50a00222b" address="unix:///run/containerd/s/facf876df7221ff6411ab19b6dd4b022747c8f472c55cb45bacbc8cef9cd7439" protocol=ttrpc version=3 Jul 15 05:19:26.805886 systemd[1]: Started cri-containerd-9830b6553057154cb52167b0c7bbe6e52f8dfebfc31e7c4e09fb8bc50a00222b.scope - libcontainer container 9830b6553057154cb52167b0c7bbe6e52f8dfebfc31e7c4e09fb8bc50a00222b. Jul 15 05:19:26.849157 systemd-networkd[1374]: calif3e031fc675: Gained IPv6LL Jul 15 05:19:26.852372 containerd[1750]: time="2025-07-15T05:19:26.852329899Z" level=info msg="StartContainer for \"9830b6553057154cb52167b0c7bbe6e52f8dfebfc31e7c4e09fb8bc50a00222b\" returns successfully" Jul 15 05:19:27.035501 containerd[1750]: time="2025-07-15T05:19:27.035334602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ltwxk,Uid:41438ff4-6284-45a5-8adf-d0024b23fdfa,Namespace:calico-system,Attempt:0,}" Jul 15 05:19:27.035501 containerd[1750]: time="2025-07-15T05:19:27.035430106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cbc978cdc-b78vb,Uid:bf63247a-25fe-4de6-99b2-376deebadade,Namespace:calico-system,Attempt:0,}" Jul 15 05:19:27.137533 systemd-networkd[1374]: cali9674f640093: Link UP Jul 15 05:19:27.138677 systemd-networkd[1374]: cali9674f640093: Gained carrier Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.069 [INFO][5068] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.077 [INFO][5068] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--11ebebb5c9-k8s-csi--node--driver--ltwxk-eth0 csi-node-driver- calico-system 41438ff4-6284-45a5-8adf-d0024b23fdfa 722 0 2025-07-15 05:19:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4396.0.0-n-11ebebb5c9 csi-node-driver-ltwxk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9674f640093 [] [] }} ContainerID="ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" Namespace="calico-system" Pod="csi-node-driver-ltwxk" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-csi--node--driver--ltwxk-" Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.077 [INFO][5068] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" Namespace="calico-system" Pod="csi-node-driver-ltwxk" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-csi--node--driver--ltwxk-eth0" Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.106 [INFO][5092] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" HandleID="k8s-pod-network.ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-csi--node--driver--ltwxk-eth0" Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.106 [INFO][5092] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" HandleID="k8s-pod-network.ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-csi--node--driver--ltwxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4396.0.0-n-11ebebb5c9", "pod":"csi-node-driver-ltwxk", "timestamp":"2025-07-15 05:19:27.105998371 +0000 UTC"}, Hostname:"ci-4396.0.0-n-11ebebb5c9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.106 [INFO][5092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.106 [INFO][5092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.106 [INFO][5092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-11ebebb5c9' Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.110 [INFO][5092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.113 [INFO][5092] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.116 [INFO][5092] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.117 [INFO][5092] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.119 [INFO][5092] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.119 [INFO][5092] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.120 [INFO][5092] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.124 [INFO][5092] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.133 [INFO][5092] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.135/26] block=192.168.15.128/26 handle="k8s-pod-network.ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" 
host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.133 [INFO][5092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.135/26] handle="k8s-pod-network.ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.133 [INFO][5092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:19:27.158393 containerd[1750]: 2025-07-15 05:19:27.133 [INFO][5092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.135/26] IPv6=[] ContainerID="ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" HandleID="k8s-pod-network.ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-csi--node--driver--ltwxk-eth0" Jul 15 05:19:27.158962 containerd[1750]: 2025-07-15 05:19:27.134 [INFO][5068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" Namespace="calico-system" Pod="csi-node-driver-ltwxk" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-csi--node--driver--ltwxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-csi--node--driver--ltwxk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41438ff4-6284-45a5-8adf-d0024b23fdfa", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 19, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"", Pod:"csi-node-driver-ltwxk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9674f640093", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:27.158962 containerd[1750]: 2025-07-15 05:19:27.135 [INFO][5068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.135/32] ContainerID="ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" Namespace="calico-system" Pod="csi-node-driver-ltwxk" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-csi--node--driver--ltwxk-eth0" Jul 15 05:19:27.158962 containerd[1750]: 2025-07-15 05:19:27.135 [INFO][5068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9674f640093 ContainerID="ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" Namespace="calico-system" Pod="csi-node-driver-ltwxk" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-csi--node--driver--ltwxk-eth0" Jul 15 05:19:27.158962 containerd[1750]: 2025-07-15 05:19:27.139 [INFO][5068] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" Namespace="calico-system" Pod="csi-node-driver-ltwxk" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-csi--node--driver--ltwxk-eth0" Jul 15 05:19:27.158962 containerd[1750]: 2025-07-15 05:19:27.140 [INFO][5068] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" Namespace="calico-system" Pod="csi-node-driver-ltwxk" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-csi--node--driver--ltwxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-csi--node--driver--ltwxk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41438ff4-6284-45a5-8adf-d0024b23fdfa", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 19, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb", Pod:"csi-node-driver-ltwxk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9674f640093", MAC:"fe:83:19:09:4b:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:27.158962 containerd[1750]: 2025-07-15 05:19:27.156 [INFO][5068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" Namespace="calico-system" Pod="csi-node-driver-ltwxk" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-csi--node--driver--ltwxk-eth0" Jul 15 05:19:27.168871 systemd-networkd[1374]: calib8760b164dc: Gained IPv6LL Jul 15 05:19:27.178375 kubelet[3187]: I0715 05:19:27.178278 3187 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rncjn" podStartSLOduration=38.178262188 podStartE2EDuration="38.178262188s" podCreationTimestamp="2025-07-15 05:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:19:27.177959205 +0000 UTC m=+43.212307461" watchObservedRunningTime="2025-07-15 05:19:27.178262188 +0000 UTC m=+43.212610489" Jul 15 05:19:27.195183 containerd[1750]: time="2025-07-15T05:19:27.195085100Z" level=info msg="connecting to shim ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb" address="unix:///run/containerd/s/32fef484e37c10dacf890ffed429ea97f811220a536fcf8c47975b3cffed7147" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:19:27.220028 systemd[1]: 
Started cri-containerd-ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb.scope - libcontainer container ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb. Jul 15 05:19:27.232846 systemd-networkd[1374]: cali3ee8a1590d6: Gained IPv6LL Jul 15 05:19:27.253770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount40189216.mount: Deactivated successfully. Jul 15 05:19:27.260378 containerd[1750]: time="2025-07-15T05:19:27.260354109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ltwxk,Uid:41438ff4-6284-45a5-8adf-d0024b23fdfa,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb\"" Jul 15 05:19:27.276328 systemd-networkd[1374]: cali4b307db4964: Link UP Jul 15 05:19:27.276813 systemd-networkd[1374]: cali4b307db4964: Gained carrier Jul 15 05:19:27.287516 kubelet[3187]: I0715 05:19:27.287427 3187 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8567dbff44-zwlhc" podStartSLOduration=1.351968522 podStartE2EDuration="5.28731878s" podCreationTimestamp="2025-07-15 05:19:22 +0000 UTC" firstStartedPulling="2025-07-15 05:19:22.807688516 +0000 UTC m=+38.842036776" lastFinishedPulling="2025-07-15 05:19:26.743038782 +0000 UTC m=+42.777387034" observedRunningTime="2025-07-15 05:19:27.223211282 +0000 UTC m=+43.257559541" watchObservedRunningTime="2025-07-15 05:19:27.28731878 +0000 UTC m=+43.321667030" Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.074 [INFO][5079] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.081 [INFO][5079] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--11ebebb5c9-k8s-calico--kube--controllers--6cbc978cdc--b78vb-eth0 calico-kube-controllers-6cbc978cdc- calico-system bf63247a-25fe-4de6-99b2-376deebadade 830 0 2025-07-15 05:19:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cbc978cdc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4396.0.0-n-11ebebb5c9 calico-kube-controllers-6cbc978cdc-b78vb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4b307db4964 [] [] }} ContainerID="0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" Namespace="calico-system" Pod="calico-kube-controllers-6cbc978cdc-b78vb" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--kube--controllers--6cbc978cdc--b78vb-" Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.082 [INFO][5079] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" Namespace="calico-system" Pod="calico-kube-controllers-6cbc978cdc-b78vb" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--kube--controllers--6cbc978cdc--b78vb-eth0" Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.107 [INFO][5098] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" HandleID="k8s-pod-network.0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-calico--kube--controllers--6cbc978cdc--b78vb-eth0" Jul 15 05:19:27.288792 containerd[1750]: 
2025-07-15 05:19:27.107 [INFO][5098] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" HandleID="k8s-pod-network.0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-calico--kube--controllers--6cbc978cdc--b78vb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4396.0.0-n-11ebebb5c9", "pod":"calico-kube-controllers-6cbc978cdc-b78vb", "timestamp":"2025-07-15 05:19:27.107705824 +0000 UTC"}, Hostname:"ci-4396.0.0-n-11ebebb5c9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.108 [INFO][5098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.133 [INFO][5098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.133 [INFO][5098] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-11ebebb5c9' Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.217 [INFO][5098] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.236 [INFO][5098] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.253 [INFO][5098] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.256 [INFO][5098] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.262 [INFO][5098] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.262 [INFO][5098] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.264 [INFO][5098] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3 Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.267 [INFO][5098] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.274 [INFO][5098] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.136/26] block=192.168.15.128/26 handle="k8s-pod-network.0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.274 [INFO][5098] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.136/26] handle="k8s-pod-network.0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" 
host="ci-4396.0.0-n-11ebebb5c9" Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.274 [INFO][5098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:19:27.288792 containerd[1750]: 2025-07-15 05:19:27.274 [INFO][5098] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.136/26] IPv6=[] ContainerID="0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" HandleID="k8s-pod-network.0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" Workload="ci--4396.0.0--n--11ebebb5c9-k8s-calico--kube--controllers--6cbc978cdc--b78vb-eth0" Jul 15 05:19:27.289998 containerd[1750]: 2025-07-15 05:19:27.275 [INFO][5079] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" Namespace="calico-system" Pod="calico-kube-controllers-6cbc978cdc-b78vb" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--kube--controllers--6cbc978cdc--b78vb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-calico--kube--controllers--6cbc978cdc--b78vb-eth0", GenerateName:"calico-kube-controllers-6cbc978cdc-", Namespace:"calico-system", SelfLink:"", UID:"bf63247a-25fe-4de6-99b2-376deebadade", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 19, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cbc978cdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"", Pod:"calico-kube-controllers-6cbc978cdc-b78vb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b307db4964", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:27.289998 containerd[1750]: 2025-07-15 05:19:27.275 [INFO][5079] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.136/32] ContainerID="0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" Namespace="calico-system" Pod="calico-kube-controllers-6cbc978cdc-b78vb" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--kube--controllers--6cbc978cdc--b78vb-eth0" Jul 15 05:19:27.289998 containerd[1750]: 2025-07-15 05:19:27.275 [INFO][5079] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b307db4964 ContainerID="0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" Namespace="calico-system" Pod="calico-kube-controllers-6cbc978cdc-b78vb" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--kube--controllers--6cbc978cdc--b78vb-eth0" Jul 15 05:19:27.289998 containerd[1750]: 2025-07-15 05:19:27.277 [INFO][5079] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" 
Namespace="calico-system" Pod="calico-kube-controllers-6cbc978cdc-b78vb" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--kube--controllers--6cbc978cdc--b78vb-eth0" Jul 15 05:19:27.289998 containerd[1750]: 2025-07-15 05:19:27.277 [INFO][5079] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" Namespace="calico-system" Pod="calico-kube-controllers-6cbc978cdc-b78vb" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--kube--controllers--6cbc978cdc--b78vb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--11ebebb5c9-k8s-calico--kube--controllers--6cbc978cdc--b78vb-eth0", GenerateName:"calico-kube-controllers-6cbc978cdc-", Namespace:"calico-system", SelfLink:"", UID:"bf63247a-25fe-4de6-99b2-376deebadade", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 19, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cbc978cdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-11ebebb5c9", ContainerID:"0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3", Pod:"calico-kube-controllers-6cbc978cdc-b78vb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b307db4964", MAC:"12:f4:fc:ff:05:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:19:27.289998 containerd[1750]: 2025-07-15 05:19:27.286 [INFO][5079] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" Namespace="calico-system" Pod="calico-kube-controllers-6cbc978cdc-b78vb" WorkloadEndpoint="ci--4396.0.0--n--11ebebb5c9-k8s-calico--kube--controllers--6cbc978cdc--b78vb-eth0" Jul 15 05:19:27.358036 containerd[1750]: time="2025-07-15T05:19:27.358003221Z" level=info msg="connecting to shim 0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3" address="unix:///run/containerd/s/7dc752ba527236c4820a48eedb4557f1d86a259fb3c5328814a65141f15c1b3c" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:19:27.373853 systemd[1]: Started cri-containerd-0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3.scope - libcontainer container 0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3. 
Jul 15 05:19:27.408220 containerd[1750]: time="2025-07-15T05:19:27.408194140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cbc978cdc-b78vb,Uid:bf63247a-25fe-4de6-99b2-376deebadade,Namespace:calico-system,Attempt:0,} returns sandbox id \"0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3\"" Jul 15 05:19:27.424799 systemd-networkd[1374]: calic53553a8e4b: Gained IPv6LL Jul 15 05:19:27.592454 kubelet[3187]: I0715 05:19:27.592161 3187 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:19:27.936860 systemd-networkd[1374]: calia14daf82fe6: Gained IPv6LL Jul 15 05:19:28.086633 systemd-networkd[1374]: vxlan.calico: Link UP Jul 15 05:19:28.086684 systemd-networkd[1374]: vxlan.calico: Gained carrier Jul 15 05:19:28.704820 systemd-networkd[1374]: cali9674f640093: Gained IPv6LL Jul 15 05:19:29.154493 systemd-networkd[1374]: cali4b307db4964: Gained IPv6LL Jul 15 05:19:29.154693 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL Jul 15 05:19:29.511024 containerd[1750]: time="2025-07-15T05:19:29.510992376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:29.513405 containerd[1750]: time="2025-07-15T05:19:29.513371386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 15 05:19:29.516937 containerd[1750]: time="2025-07-15T05:19:29.516897493Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:29.520800 containerd[1750]: time="2025-07-15T05:19:29.520773536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:29.521337 containerd[1750]: time="2025-07-15T05:19:29.521123275Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.777533031s" Jul 15 05:19:29.521337 containerd[1750]: time="2025-07-15T05:19:29.521147270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 05:19:29.522325 containerd[1750]: time="2025-07-15T05:19:29.522113067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 05:19:29.523096 containerd[1750]: time="2025-07-15T05:19:29.523070440Z" level=info msg="CreateContainer within sandbox \"bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:19:29.546556 containerd[1750]: time="2025-07-15T05:19:29.545898647Z" level=info msg="Container ece39b8579e6042b19b4f38e8ccd06051390e9fd1e7bf5dd72150c0c363cc092: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:19:29.559616 containerd[1750]: time="2025-07-15T05:19:29.559594311Z" level=info msg="CreateContainer within sandbox \"bfbd6bebe25fab1709f21c5ce4d174d7e1a2d4eb1b97f1dd772aac8853ca17ac\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ece39b8579e6042b19b4f38e8ccd06051390e9fd1e7bf5dd72150c0c363cc092\"" Jul 15 05:19:29.559982 containerd[1750]: time="2025-07-15T05:19:29.559961870Z" level=info msg="StartContainer for \"ece39b8579e6042b19b4f38e8ccd06051390e9fd1e7bf5dd72150c0c363cc092\"" Jul 15 05:19:29.561188 containerd[1750]: time="2025-07-15T05:19:29.561157210Z" level=info msg="connecting to shim ece39b8579e6042b19b4f38e8ccd06051390e9fd1e7bf5dd72150c0c363cc092" address="unix:///run/containerd/s/db715c0654dd560b2c2cf9253f0f6b5d9e0ce9554d0139caf7fdbabd79a15ef2" protocol=ttrpc version=3 Jul 15 05:19:29.581883 systemd[1]: Started cri-containerd-ece39b8579e6042b19b4f38e8ccd06051390e9fd1e7bf5dd72150c0c363cc092.scope - libcontainer container ece39b8579e6042b19b4f38e8ccd06051390e9fd1e7bf5dd72150c0c363cc092. Jul 15 05:19:29.624494 containerd[1750]: time="2025-07-15T05:19:29.624461763Z" level=info msg="StartContainer for \"ece39b8579e6042b19b4f38e8ccd06051390e9fd1e7bf5dd72150c0c363cc092\" returns successfully" Jul 15 05:19:31.183953 kubelet[3187]: I0715 05:19:31.183928 3187 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:19:32.067824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921052967.mount: Deactivated successfully. Jul 15 05:19:32.657009 containerd[1750]: time="2025-07-15T05:19:32.656975680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:32.660511 containerd[1750]: time="2025-07-15T05:19:32.660479810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 15 05:19:32.663755 containerd[1750]: time="2025-07-15T05:19:32.663706805Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:32.667824 containerd[1750]: time="2025-07-15T05:19:32.667782132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:32.668251 containerd[1750]: time="2025-07-15T05:19:32.668155709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.146015394s" Jul 15 05:19:32.668251 containerd[1750]: time="2025-07-15T05:19:32.668180668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 15 05:19:32.669391 containerd[1750]: time="2025-07-15T05:19:32.669238733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 05:19:32.670190 containerd[1750]: time="2025-07-15T05:19:32.670161893Z" level=info msg="CreateContainer within sandbox \"a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 05:19:32.695190 containerd[1750]: time="2025-07-15T05:19:32.694432892Z" level=info msg="Container 
9f696e7b18c748577176d18f81b1d0e226b844781e1598d358aa3fe59d28ba2d: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:19:32.712048 containerd[1750]: time="2025-07-15T05:19:32.712025906Z" level=info msg="CreateContainer within sandbox \"a8bcc0aa8e80f2af4f27acd4e24a2dab9a9dc1ecf56d0e07a44e081d662b61c7\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9f696e7b18c748577176d18f81b1d0e226b844781e1598d358aa3fe59d28ba2d\"" Jul 15 05:19:32.712401 containerd[1750]: time="2025-07-15T05:19:32.712346702Z" level=info msg="StartContainer for \"9f696e7b18c748577176d18f81b1d0e226b844781e1598d358aa3fe59d28ba2d\"" Jul 15 05:19:32.713479 containerd[1750]: time="2025-07-15T05:19:32.713447685Z" level=info msg="connecting to shim 9f696e7b18c748577176d18f81b1d0e226b844781e1598d358aa3fe59d28ba2d" address="unix:///run/containerd/s/c10a33f8e4fc05e441b4ad5adcb1bdfba42d80eb411a207a35ecf2bb823fab77" protocol=ttrpc version=3 Jul 15 05:19:32.731871 systemd[1]: Started cri-containerd-9f696e7b18c748577176d18f81b1d0e226b844781e1598d358aa3fe59d28ba2d.scope - libcontainer container 9f696e7b18c748577176d18f81b1d0e226b844781e1598d358aa3fe59d28ba2d. Jul 15 05:19:32.773128 containerd[1750]: time="2025-07-15T05:19:32.773106127Z" level=info msg="StartContainer for \"9f696e7b18c748577176d18f81b1d0e226b844781e1598d358aa3fe59d28ba2d\" returns successfully" Jul 15 05:19:32.992201 containerd[1750]: time="2025-07-15T05:19:32.991773111Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:32.994064 containerd[1750]: time="2025-07-15T05:19:32.994042550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 05:19:32.995434 containerd[1750]: time="2025-07-15T05:19:32.995400067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 326.124809ms" Jul 15 05:19:32.995503 containerd[1750]: time="2025-07-15T05:19:32.995436811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 05:19:32.996132 containerd[1750]: time="2025-07-15T05:19:32.996115766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 05:19:32.997196 containerd[1750]: time="2025-07-15T05:19:32.997160194Z" level=info msg="CreateContainer within sandbox \"af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:19:33.016392 containerd[1750]: time="2025-07-15T05:19:33.016349215Z" level=info msg="Container 0391cea356f1af88088a5ca31366cf90ed1919c9f6d307c97f1f7a0e2a72d131: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:19:33.037523 containerd[1750]: time="2025-07-15T05:19:33.037497522Z" level=info msg="CreateContainer within sandbox \"af6d7e624a2b1768b251cfba14f222f95c85b866390fd7117ce8184b536adef0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0391cea356f1af88088a5ca31366cf90ed1919c9f6d307c97f1f7a0e2a72d131\"" Jul 15 05:19:33.038025 containerd[1750]: time="2025-07-15T05:19:33.037818234Z" level=info msg="StartContainer 
for \"0391cea356f1af88088a5ca31366cf90ed1919c9f6d307c97f1f7a0e2a72d131\"" Jul 15 05:19:33.038765 containerd[1750]: time="2025-07-15T05:19:33.038726068Z" level=info msg="connecting to shim 0391cea356f1af88088a5ca31366cf90ed1919c9f6d307c97f1f7a0e2a72d131" address="unix:///run/containerd/s/05140cec7cc71d7f99f4011df33d4338dd67e7871ddf254fd33cdce69aa312c4" protocol=ttrpc version=3 Jul 15 05:19:33.057860 systemd[1]: Started cri-containerd-0391cea356f1af88088a5ca31366cf90ed1919c9f6d307c97f1f7a0e2a72d131.scope - libcontainer container 0391cea356f1af88088a5ca31366cf90ed1919c9f6d307c97f1f7a0e2a72d131. Jul 15 05:19:33.096611 containerd[1750]: time="2025-07-15T05:19:33.096560597Z" level=info msg="StartContainer for \"0391cea356f1af88088a5ca31366cf90ed1919c9f6d307c97f1f7a0e2a72d131\" returns successfully" Jul 15 05:19:33.207489 kubelet[3187]: I0715 05:19:33.207434 3187 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b745599d4-xljsz" podStartSLOduration=30.976459204 podStartE2EDuration="35.207417275s" podCreationTimestamp="2025-07-15 05:18:58 +0000 UTC" firstStartedPulling="2025-07-15 05:19:25.290786719 +0000 UTC m=+41.325134976" lastFinishedPulling="2025-07-15 05:19:29.521744789 +0000 UTC m=+45.556093047" observedRunningTime="2025-07-15 05:19:30.195324388 +0000 UTC m=+46.229672655" watchObservedRunningTime="2025-07-15 05:19:33.207417275 +0000 UTC m=+49.241765537" Jul 15 05:19:33.209321 kubelet[3187]: I0715 05:19:33.209289 3187 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-5t4hj" podStartSLOduration=27.089889811 podStartE2EDuration="34.209277124s" podCreationTimestamp="2025-07-15 05:18:59 +0000 UTC" firstStartedPulling="2025-07-15 05:19:25.54939624 +0000 UTC m=+41.583744491" lastFinishedPulling="2025-07-15 05:19:32.668783549 +0000 UTC m=+48.703131804" observedRunningTime="2025-07-15 05:19:33.206900865 +0000 UTC m=+49.241249126" watchObservedRunningTime="2025-07-15 05:19:33.209277124 +0000 UTC m=+49.243625376" Jul 15 05:19:33.282664 containerd[1750]: time="2025-07-15T05:19:33.282600614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f696e7b18c748577176d18f81b1d0e226b844781e1598d358aa3fe59d28ba2d\" id:\"69e8bfca1c3a1607e89b81f4f8c981d686f237624e7bca5fd093ce4997d61b5f\" pid:5490 exit_status:1 exited_at:{seconds:1752556773 nanos:282339003}" Jul 15 05:19:34.288711 containerd[1750]: time="2025-07-15T05:19:34.288674222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f696e7b18c748577176d18f81b1d0e226b844781e1598d358aa3fe59d28ba2d\" id:\"5d78234f3119821668a51660a894c3fe9ecf05a677a773a16b79b8c76de9e761\" pid:5529 exit_status:1 exited_at:{seconds:1752556774 nanos:288319089}" Jul 15 05:19:34.471636 kubelet[3187]: I0715 05:19:34.471139 3187 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b745599d4-6rf49" podStartSLOduration=30.994617691 podStartE2EDuration="37.471123008s" podCreationTimestamp="2025-07-15 05:18:57 +0000 UTC" firstStartedPulling="2025-07-15 05:19:26.519495521 +0000 UTC m=+42.553843778" lastFinishedPulling="2025-07-15 05:19:32.996000842 +0000 UTC m=+49.030349095" observedRunningTime="2025-07-15 05:19:33.22982993 +0000 UTC m=+49.264178187" watchObservedRunningTime="2025-07-15 05:19:34.471123008 +0000 UTC m=+50.505471266" Jul 15 05:19:35.256587 containerd[1750]: time="2025-07-15T05:19:35.256543200Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"9f696e7b18c748577176d18f81b1d0e226b844781e1598d358aa3fe59d28ba2d\" id:\"7c78849d0b40fa27d5a9f6a7b2cdf6ad92a5b9da4f55b0d84bb398a56a520f9c\" pid:5558 exit_status:1 exited_at:{seconds:1752556775 nanos:256328403}" Jul 15 05:19:39.301091 containerd[1750]: time="2025-07-15T05:19:39.301039148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:39.696303 containerd[1750]: time="2025-07-15T05:19:39.696265328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 15 05:19:39.997210 containerd[1750]: time="2025-07-15T05:19:39.996985501Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:41.046666 containerd[1750]: time="2025-07-15T05:19:41.046309975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:41.106984 containerd[1750]: time="2025-07-15T05:19:41.047172490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 8.05103341s" Jul 15 05:19:41.106984 containerd[1750]: time="2025-07-15T05:19:41.047196896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 15 05:19:41.106984 containerd[1750]: time="2025-07-15T05:19:41.048088293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 05:19:41.106984 containerd[1750]: time="2025-07-15T05:19:41.049466030Z" level=info msg="CreateContainer within sandbox \"ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 05:19:43.250756 containerd[1750]: time="2025-07-15T05:19:43.247669014Z" level=info msg="Container 1bbc0268d831cbbb377b24b37d06d1dd2d2162781991b161b456cf463991f4fa: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:19:43.348972 containerd[1750]: time="2025-07-15T05:19:43.348938121Z" level=info msg="CreateContainer within sandbox \"ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1bbc0268d831cbbb377b24b37d06d1dd2d2162781991b161b456cf463991f4fa\"" Jul 15 05:19:43.349290 containerd[1750]: time="2025-07-15T05:19:43.349254387Z" level=info msg="StartContainer for \"1bbc0268d831cbbb377b24b37d06d1dd2d2162781991b161b456cf463991f4fa\"" Jul 15 05:19:43.350828 containerd[1750]: time="2025-07-15T05:19:43.350793654Z" level=info msg="connecting to shim 1bbc0268d831cbbb377b24b37d06d1dd2d2162781991b161b456cf463991f4fa" address="unix:///run/containerd/s/32fef484e37c10dacf890ffed429ea97f811220a536fcf8c47975b3cffed7147" protocol=ttrpc version=3 Jul 15 05:19:43.370898 systemd[1]: Started cri-containerd-1bbc0268d831cbbb377b24b37d06d1dd2d2162781991b161b456cf463991f4fa.scope - libcontainer container 1bbc0268d831cbbb377b24b37d06d1dd2d2162781991b161b456cf463991f4fa. 
Jul 15 05:19:43.411566 containerd[1750]: time="2025-07-15T05:19:43.411549263Z" level=info msg="StartContainer for \"1bbc0268d831cbbb377b24b37d06d1dd2d2162781991b161b456cf463991f4fa\" returns successfully" Jul 15 05:19:46.957493 containerd[1750]: time="2025-07-15T05:19:46.957454430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:46.960160 containerd[1750]: time="2025-07-15T05:19:46.960138934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 15 05:19:46.962880 containerd[1750]: time="2025-07-15T05:19:46.962853382Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:46.967255 containerd[1750]: time="2025-07-15T05:19:46.967200991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:46.967748 containerd[1750]: time="2025-07-15T05:19:46.967507995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.919394883s" Jul 15 05:19:46.967748 containerd[1750]: time="2025-07-15T05:19:46.967534712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 15 05:19:46.968258 containerd[1750]: time="2025-07-15T05:19:46.968240480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 05:19:46.977185 containerd[1750]: time="2025-07-15T05:19:46.977163240Z" level=info msg="CreateContainer within sandbox \"0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 05:19:47.003932 containerd[1750]: time="2025-07-15T05:19:47.002825962Z" level=info msg="Container 80a805f0f6b5b8ec92555f10e2d15c8cb51e04aa5a6d800db977c0206bc005cd: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:19:47.026555 containerd[1750]: time="2025-07-15T05:19:47.026531439Z" level=info msg="CreateContainer within sandbox \"0df2f519ea6c79953ce82842481c87ddaf8c80e29ca592ab505ec8235981c5e3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"80a805f0f6b5b8ec92555f10e2d15c8cb51e04aa5a6d800db977c0206bc005cd\"" Jul 15 05:19:47.026925 containerd[1750]: time="2025-07-15T05:19:47.026855768Z" level=info msg="StartContainer for \"80a805f0f6b5b8ec92555f10e2d15c8cb51e04aa5a6d800db977c0206bc005cd\"" Jul 15 05:19:47.027896 containerd[1750]: time="2025-07-15T05:19:47.027865053Z" level=info msg="connecting to shim 80a805f0f6b5b8ec92555f10e2d15c8cb51e04aa5a6d800db977c0206bc005cd" address="unix:///run/containerd/s/7dc752ba527236c4820a48eedb4557f1d86a259fb3c5328814a65141f15c1b3c" protocol=ttrpc version=3 Jul 15 05:19:47.048888 systemd[1]: Started cri-containerd-80a805f0f6b5b8ec92555f10e2d15c8cb51e04aa5a6d800db977c0206bc005cd.scope - libcontainer 
container 80a805f0f6b5b8ec92555f10e2d15c8cb51e04aa5a6d800db977c0206bc005cd. Jul 15 05:19:47.088723 containerd[1750]: time="2025-07-15T05:19:47.088696113Z" level=info msg="StartContainer for \"80a805f0f6b5b8ec92555f10e2d15c8cb51e04aa5a6d800db977c0206bc005cd\" returns successfully" Jul 15 05:19:47.254704 containerd[1750]: time="2025-07-15T05:19:47.254606900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80a805f0f6b5b8ec92555f10e2d15c8cb51e04aa5a6d800db977c0206bc005cd\" id:\"ac9a686b839e7c167ab058e1a747235353eb6e289e5fd3293f273bcd4b19e625\" pid:5667 exited_at:{seconds:1752556787 nanos:254243464}" Jul 15 05:19:47.267691 kubelet[3187]: I0715 05:19:47.267569 3187 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6cbc978cdc-b78vb" podStartSLOduration=27.708400129 podStartE2EDuration="47.267552737s" podCreationTimestamp="2025-07-15 05:19:00 +0000 UTC" firstStartedPulling="2025-07-15 05:19:27.408990248 +0000 UTC m=+43.443338508" lastFinishedPulling="2025-07-15 05:19:46.968142855 +0000 UTC m=+63.002491116" observedRunningTime="2025-07-15 05:19:47.236185881 +0000 UTC m=+63.270534140" watchObservedRunningTime="2025-07-15 05:19:47.267552737 +0000 UTC m=+63.301900996" Jul 15 05:19:48.843692 containerd[1750]: time="2025-07-15T05:19:48.843653276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:48.846250 containerd[1750]: time="2025-07-15T05:19:48.846079669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 15 05:19:48.907996 containerd[1750]: time="2025-07-15T05:19:48.907973347Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:49.001546 containerd[1750]: time="2025-07-15T05:19:49.001487630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:19:49.002054 containerd[1750]: time="2025-07-15T05:19:49.001991941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.033375483s" Jul 15 05:19:49.002054 containerd[1750]: time="2025-07-15T05:19:49.002016573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 15 05:19:49.003713 containerd[1750]: time="2025-07-15T05:19:49.003669467Z" level=info msg="CreateContainer within sandbox \"ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 05:19:49.210935 containerd[1750]: time="2025-07-15T05:19:49.210881088Z" level=info msg="Container e6c0edabfc3cc45d213cc540c8c7e318dbcea01eaa559b56775e7a05d49ac13d: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:19:49.306895 containerd[1750]: 
time="2025-07-15T05:19:49.306871535Z" level=info msg="CreateContainer within sandbox \"ea63237fe5b02211558c5342284fce9a09eadf6fd36961c9d6f197ef1f3e2fcb\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e6c0edabfc3cc45d213cc540c8c7e318dbcea01eaa559b56775e7a05d49ac13d\"" Jul 15 05:19:49.307204 containerd[1750]: time="2025-07-15T05:19:49.307179089Z" level=info msg="StartContainer for \"e6c0edabfc3cc45d213cc540c8c7e318dbcea01eaa559b56775e7a05d49ac13d\"" Jul 15 05:19:49.308589 containerd[1750]: time="2025-07-15T05:19:49.308553716Z" level=info msg="connecting to shim e6c0edabfc3cc45d213cc540c8c7e318dbcea01eaa559b56775e7a05d49ac13d" address="unix:///run/containerd/s/32fef484e37c10dacf890ffed429ea97f811220a536fcf8c47975b3cffed7147" protocol=ttrpc version=3 Jul 15 05:19:49.327857 systemd[1]: Started cri-containerd-e6c0edabfc3cc45d213cc540c8c7e318dbcea01eaa559b56775e7a05d49ac13d.scope - libcontainer container e6c0edabfc3cc45d213cc540c8c7e318dbcea01eaa559b56775e7a05d49ac13d. Jul 15 05:19:49.356689 containerd[1750]: time="2025-07-15T05:19:49.356600570Z" level=info msg="StartContainer for \"e6c0edabfc3cc45d213cc540c8c7e318dbcea01eaa559b56775e7a05d49ac13d\" returns successfully" Jul 15 05:19:50.132164 kubelet[3187]: I0715 05:19:50.132143 3187 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 05:19:50.132164 kubelet[3187]: I0715 05:19:50.132168 3187 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 05:19:50.239836 kubelet[3187]: I0715 05:19:50.239790 3187 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ltwxk" podStartSLOduration=28.498330039 podStartE2EDuration="50.239771738s" podCreationTimestamp="2025-07-15 05:19:00 +0000 UTC" firstStartedPulling="2025-07-15 05:19:27.261171257 +0000 UTC m=+43.295519508" lastFinishedPulling="2025-07-15 05:19:49.00261295 +0000 UTC m=+65.036961207" observedRunningTime="2025-07-15 05:19:50.238719935 +0000 UTC m=+66.273068192" watchObservedRunningTime="2025-07-15 05:19:50.239771738 +0000 UTC m=+66.274119990" Jul 15 05:19:52.203834 containerd[1750]: time="2025-07-15T05:19:52.203781253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e70f1cb251ed89aff761ec089f50b6ad0979c186c502d3e9ec1000ac6e13d780\" id:\"3112820054fc68fb26edd32a2e447d8805b1edb05cb850c64c6bf57cf2c8ffb9\" pid:5732 exited_at:{seconds:1752556792 nanos:203500930}" Jul 15 05:20:04.049214 systemd[1]: Started sshd@7-10.200.8.39:22-10.200.16.10:35486.service - OpenSSH per-connection server daemon (10.200.16.10:35486). Jul 15 05:20:04.678485 sshd[5751]: Accepted publickey for core from 10.200.16.10 port 35486 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:20:04.679510 sshd-session[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:04.683422 systemd-logind[1724]: New session 10 of user core. Jul 15 05:20:04.687899 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 15 05:20:04.693217 kubelet[3187]: I0715 05:20:04.693148 3187 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:20:05.197259 sshd[5754]: Connection closed by 10.200.16.10 port 35486 Jul 15 05:20:05.198065 sshd-session[5751]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:05.204258 systemd[1]: sshd@7-10.200.8.39:22-10.200.16.10:35486.service: Deactivated successfully. Jul 15 05:20:05.208633 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 05:20:05.214068 systemd-logind[1724]: Session 10 logged out. Waiting for processes to exit. Jul 15 05:20:05.217011 systemd-logind[1724]: Removed session 10. Jul 15 05:20:05.285520 containerd[1750]: time="2025-07-15T05:20:05.285462741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f696e7b18c748577176d18f81b1d0e226b844781e1598d358aa3fe59d28ba2d\" id:\"fe476c69929a6cc3df054bc9d873f8bcc49a38c710d5f5ae39b815a4a29a5be5\" pid:5779 exited_at:{seconds:1752556805 nanos:285122845}" Jul 15 05:20:10.309073 systemd[1]: Started sshd@8-10.200.8.39:22-10.200.16.10:43440.service - OpenSSH per-connection server daemon (10.200.16.10:43440). Jul 15 05:20:10.940321 sshd[5802]: Accepted publickey for core from 10.200.16.10 port 43440 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:20:10.941314 sshd-session[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:10.945371 systemd-logind[1724]: New session 11 of user core. Jul 15 05:20:10.947870 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 05:20:11.430146 sshd[5805]: Connection closed by 10.200.16.10 port 43440 Jul 15 05:20:11.430083 sshd-session[5802]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:11.434927 systemd[1]: sshd@8-10.200.8.39:22-10.200.16.10:43440.service: Deactivated successfully. Jul 15 05:20:11.437641 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 05:20:11.439564 systemd-logind[1724]: Session 11 logged out. Waiting for processes to exit. Jul 15 05:20:11.442172 systemd-logind[1724]: Removed session 11. Jul 15 05:20:16.545945 systemd[1]: Started sshd@9-10.200.8.39:22-10.200.16.10:43444.service - OpenSSH per-connection server daemon (10.200.16.10:43444). Jul 15 05:20:17.191117 sshd[5818]: Accepted publickey for core from 10.200.16.10 port 43444 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:20:17.192110 sshd-session[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:17.195993 systemd-logind[1724]: New session 12 of user core. Jul 15 05:20:17.200893 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 05:20:17.253859 containerd[1750]: time="2025-07-15T05:20:17.253822780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80a805f0f6b5b8ec92555f10e2d15c8cb51e04aa5a6d800db977c0206bc005cd\" id:\"9b3d8125e618b06ffa6366d9cb98638edadca2adc242ce31e741344182ab06c0\" pid:5834 exited_at:{seconds:1752556817 nanos:253622979}" Jul 15 05:20:17.679272 sshd[5821]: Connection closed by 10.200.16.10 port 43444 Jul 15 05:20:17.679623 sshd-session[5818]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:17.682272 systemd[1]: sshd@9-10.200.8.39:22-10.200.16.10:43444.service: Deactivated successfully. Jul 15 05:20:17.684007 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 05:20:17.685253 systemd-logind[1724]: Session 12 logged out. Waiting for processes to exit. 
Jul 15 05:20:17.686123 systemd-logind[1724]: Removed session 12. Jul 15 05:20:17.789284 systemd[1]: Started sshd@10-10.200.8.39:22-10.200.16.10:43456.service - OpenSSH per-connection server daemon (10.200.16.10:43456). Jul 15 05:20:18.411111 sshd[5855]: Accepted publickey for core from 10.200.16.10 port 43456 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:20:18.412346 sshd-session[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:18.416555 systemd-logind[1724]: New session 13 of user core. Jul 15 05:20:18.422020 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 05:20:18.922894 sshd[5858]: Connection closed by 10.200.16.10 port 43456 Jul 15 05:20:18.923330 sshd-session[5855]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:18.926328 systemd[1]: sshd@10-10.200.8.39:22-10.200.16.10:43456.service: Deactivated successfully. Jul 15 05:20:18.928186 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 05:20:18.928925 systemd-logind[1724]: Session 13 logged out. Waiting for processes to exit. Jul 15 05:20:18.929786 systemd-logind[1724]: Removed session 13. Jul 15 05:20:19.030441 systemd[1]: Started sshd@11-10.200.8.39:22-10.200.16.10:43470.service - OpenSSH per-connection server daemon (10.200.16.10:43470). Jul 15 05:20:19.651239 sshd[5867]: Accepted publickey for core from 10.200.16.10 port 43470 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:20:19.652353 sshd-session[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:19.656305 systemd-logind[1724]: New session 14 of user core. Jul 15 05:20:19.660882 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 05:20:20.169903 sshd[5873]: Connection closed by 10.200.16.10 port 43470 Jul 15 05:20:20.171648 sshd-session[5867]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:20.175228 systemd[1]: sshd@11-10.200.8.39:22-10.200.16.10:43470.service: Deactivated successfully. Jul 15 05:20:20.178399 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 05:20:20.180312 systemd-logind[1724]: Session 14 logged out. Waiting for processes to exit. Jul 15 05:20:20.181458 systemd-logind[1724]: Removed session 14. Jul 15 05:20:21.660395 containerd[1750]: time="2025-07-15T05:20:21.660343844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f696e7b18c748577176d18f81b1d0e226b844781e1598d358aa3fe59d28ba2d\" id:\"5d47d9c090a4c3af46f18b727ae83b0eedcc312fdec4deae551a3e4734124d22\" pid:5897 exited_at:{seconds:1752556821 nanos:659781065}" Jul 15 05:20:22.204351 containerd[1750]: time="2025-07-15T05:20:22.204313905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e70f1cb251ed89aff761ec089f50b6ad0979c186c502d3e9ec1000ac6e13d780\" id:\"0aa0e82e082df2fb560b67e43e64cf64fd965fb7d8bed392fb0dda1aded3512b\" pid:5919 exited_at:{seconds:1752556822 nanos:204080855}" Jul 15 05:20:25.284604 systemd[1]: Started sshd@12-10.200.8.39:22-10.200.16.10:55656.service - OpenSSH per-connection server daemon (10.200.16.10:55656). Jul 15 05:20:25.909643 sshd[5936]: Accepted publickey for core from 10.200.16.10 port 55656 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:20:25.911273 sshd-session[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:25.917533 systemd-logind[1724]: New session 15 of user core. 
Jul 15 05:20:25.922083 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 05:20:26.417757 sshd[5939]: Connection closed by 10.200.16.10 port 55656 Jul 15 05:20:26.419881 sshd-session[5936]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:26.423377 systemd-logind[1724]: Session 15 logged out. Waiting for processes to exit. Jul 15 05:20:26.424174 systemd[1]: sshd@12-10.200.8.39:22-10.200.16.10:55656.service: Deactivated successfully. Jul 15 05:20:26.427254 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 05:20:26.431562 systemd-logind[1724]: Removed session 15. Jul 15 05:20:28.656263 containerd[1750]: time="2025-07-15T05:20:28.656191539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80a805f0f6b5b8ec92555f10e2d15c8cb51e04aa5a6d800db977c0206bc005cd\" id:\"f8f3701f5e64e7b3b8c8174d36bb526d7d6cc07f167065aba10c3f051fc26477\" pid:5964 exited_at:{seconds:1752556828 nanos:655987965}" Jul 15 05:20:31.530856 systemd[1]: Started sshd@13-10.200.8.39:22-10.200.16.10:42518.service - OpenSSH per-connection server daemon (10.200.16.10:42518). Jul 15 05:20:32.156223 sshd[5974]: Accepted publickey for core from 10.200.16.10 port 42518 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:20:32.157247 sshd-session[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:32.162856 systemd-logind[1724]: New session 16 of user core. Jul 15 05:20:32.165880 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 15 05:20:32.660696 sshd[5977]: Connection closed by 10.200.16.10 port 42518 Jul 15 05:20:32.661902 sshd-session[5974]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:32.665607 systemd-logind[1724]: Session 16 logged out. Waiting for processes to exit. Jul 15 05:20:32.667108 systemd[1]: sshd@13-10.200.8.39:22-10.200.16.10:42518.service: Deactivated successfully. Jul 15 05:20:32.669391 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 05:20:32.671485 systemd-logind[1724]: Removed session 16. Jul 15 05:20:35.285589 containerd[1750]: time="2025-07-15T05:20:35.285457471Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f696e7b18c748577176d18f81b1d0e226b844781e1598d358aa3fe59d28ba2d\" id:\"75f022065a41558f6a3225d15cb399a706a3069cc1c0c7185186fca66014ba03\" pid:6001 exited_at:{seconds:1752556835 nanos:285196801}" Jul 15 05:20:37.775956 systemd[1]: Started sshd@14-10.200.8.39:22-10.200.16.10:42520.service - OpenSSH per-connection server daemon (10.200.16.10:42520). Jul 15 05:20:38.401345 sshd[6012]: Accepted publickey for core from 10.200.16.10 port 42520 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:20:38.402398 sshd-session[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:38.406494 systemd-logind[1724]: New session 17 of user core. Jul 15 05:20:38.410881 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 15 05:20:38.888914 sshd[6015]: Connection closed by 10.200.16.10 port 42520 Jul 15 05:20:38.890320 sshd-session[6012]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:38.893093 systemd[1]: sshd@14-10.200.8.39:22-10.200.16.10:42520.service: Deactivated successfully. Jul 15 05:20:38.896525 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 05:20:38.898420 systemd-logind[1724]: Session 17 logged out. Waiting for processes to exit. Jul 15 05:20:38.900467 systemd-logind[1724]: Removed session 17. 
Jul 15 05:20:39.000938 systemd[1]: Started sshd@15-10.200.8.39:22-10.200.16.10:42528.service - OpenSSH per-connection server daemon (10.200.16.10:42528). Jul 15 05:20:39.642100 sshd[6027]: Accepted publickey for core from 10.200.16.10 port 42528 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:20:39.643077 sshd-session[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:39.646809 systemd-logind[1724]: New session 18 of user core. Jul 15 05:20:39.651889 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 05:20:40.182578 sshd[6030]: Connection closed by 10.200.16.10 port 42528 Jul 15 05:20:40.183002 sshd-session[6027]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:40.185624 systemd[1]: sshd@15-10.200.8.39:22-10.200.16.10:42528.service: Deactivated successfully. Jul 15 05:20:40.187256 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 05:20:40.188366 systemd-logind[1724]: Session 18 logged out. Waiting for processes to exit. Jul 15 05:20:40.189587 systemd-logind[1724]: Removed session 18. Jul 15 05:20:40.293282 systemd[1]: Started sshd@16-10.200.8.39:22-10.200.16.10:35320.service - OpenSSH per-connection server daemon (10.200.16.10:35320). Jul 15 05:20:40.924050 sshd[6040]: Accepted publickey for core from 10.200.16.10 port 35320 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:20:40.925320 sshd-session[6040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:40.930470 systemd-logind[1724]: New session 19 of user core. Jul 15 05:20:40.936875 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 15 05:20:41.919064 sshd[6043]: Connection closed by 10.200.16.10 port 35320 Jul 15 05:20:41.919575 sshd-session[6040]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:41.923692 systemd-logind[1724]: Session 19 logged out. Waiting for processes to exit. Jul 15 05:20:41.924182 systemd[1]: sshd@16-10.200.8.39:22-10.200.16.10:35320.service: Deactivated successfully. Jul 15 05:20:41.925901 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 05:20:41.927307 systemd-logind[1724]: Removed session 19. Jul 15 05:20:42.032455 systemd[1]: Started sshd@17-10.200.8.39:22-10.200.16.10:35330.service - OpenSSH per-connection server daemon (10.200.16.10:35330). Jul 15 05:20:42.658755 sshd[6060]: Accepted publickey for core from 10.200.16.10 port 35330 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw Jul 15 05:20:42.659779 sshd-session[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:42.666093 systemd-logind[1724]: New session 20 of user core. Jul 15 05:20:42.668979 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 15 05:20:43.229599 sshd[6063]: Connection closed by 10.200.16.10 port 35330 Jul 15 05:20:43.230069 sshd-session[6060]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:43.233235 systemd[1]: sshd@17-10.200.8.39:22-10.200.16.10:35330.service: Deactivated successfully. Jul 15 05:20:43.235088 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 05:20:43.235768 systemd-logind[1724]: Session 20 logged out. Waiting for processes to exit. Jul 15 05:20:43.236891 systemd-logind[1724]: Removed session 20. Jul 15 05:20:43.341179 systemd[1]: Started sshd@18-10.200.8.39:22-10.200.16.10:35340.service - OpenSSH per-connection server daemon (10.200.16.10:35340). 
Jul 15 05:20:43.962648 sshd[6073]: Accepted publickey for core from 10.200.16.10 port 35340 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw
Jul 15 05:20:43.962967 sshd-session[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:20:43.969367 systemd-logind[1724]: New session 21 of user core.
Jul 15 05:20:43.977371 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 05:20:44.443108 sshd[6076]: Connection closed by 10.200.16.10 port 35340
Jul 15 05:20:44.443596 sshd-session[6073]: pam_unix(sshd:session): session closed for user core
Jul 15 05:20:44.446192 systemd[1]: sshd@18-10.200.8.39:22-10.200.16.10:35340.service: Deactivated successfully.
Jul 15 05:20:44.447705 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 05:20:44.448955 systemd-logind[1724]: Session 21 logged out. Waiting for processes to exit.
Jul 15 05:20:44.450216 systemd-logind[1724]: Removed session 21.
Jul 15 05:20:47.283614 containerd[1750]: time="2025-07-15T05:20:47.283568125Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80a805f0f6b5b8ec92555f10e2d15c8cb51e04aa5a6d800db977c0206bc005cd\" id:\"47d6dabfaa5f2e261cd76132f283d9c6b3be0b7ba4e0ca6032fc63c9dffb5d09\" pid:6102 exited_at:{seconds:1752556847 nanos:283218940}"
Jul 15 05:20:49.566485 systemd[1]: Started sshd@19-10.200.8.39:22-10.200.16.10:35344.service - OpenSSH per-connection server daemon (10.200.16.10:35344).
Jul 15 05:20:50.193347 sshd[6123]: Accepted publickey for core from 10.200.16.10 port 35344 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw
Jul 15 05:20:50.194355 sshd-session[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:20:50.198378 systemd-logind[1724]: New session 22 of user core.
Jul 15 05:20:50.202893 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 05:20:50.695250 sshd[6126]: Connection closed by 10.200.16.10 port 35344
Jul 15 05:20:50.694819 sshd-session[6123]: pam_unix(sshd:session): session closed for user core
Jul 15 05:20:50.697366 systemd-logind[1724]: Session 22 logged out. Waiting for processes to exit.
Jul 15 05:20:50.697848 systemd[1]: sshd@19-10.200.8.39:22-10.200.16.10:35344.service: Deactivated successfully.
Jul 15 05:20:50.699369 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 05:20:50.701097 systemd-logind[1724]: Removed session 22.
Jul 15 05:20:52.227019 containerd[1750]: time="2025-07-15T05:20:52.226979902Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e70f1cb251ed89aff761ec089f50b6ad0979c186c502d3e9ec1000ac6e13d780\" id:\"10349fc258f58850b84fb641e6579211ac94e7d729852134e8b1224de6183f46\" pid:6150 exited_at:{seconds:1752556852 nanos:226581403}"
Jul 15 05:20:55.805731 systemd[1]: Started sshd@20-10.200.8.39:22-10.200.16.10:48160.service - OpenSSH per-connection server daemon (10.200.16.10:48160).
Jul 15 05:20:56.435726 sshd[6165]: Accepted publickey for core from 10.200.16.10 port 48160 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw
Jul 15 05:20:56.437216 sshd-session[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:20:56.446303 systemd-logind[1724]: New session 23 of user core.
Jul 15 05:20:56.452833 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 05:20:56.986229 sshd[6168]: Connection closed by 10.200.16.10 port 48160
Jul 15 05:20:56.987409 sshd-session[6165]: pam_unix(sshd:session): session closed for user core
Jul 15 05:20:56.990877 systemd-logind[1724]: Session 23 logged out. Waiting for processes to exit.
Jul 15 05:20:56.992493 systemd[1]: sshd@20-10.200.8.39:22-10.200.16.10:48160.service: Deactivated successfully.
Jul 15 05:20:56.995496 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 05:20:56.998037 systemd-logind[1724]: Removed session 23.
Jul 15 05:21:02.098470 systemd[1]: Started sshd@21-10.200.8.39:22-10.200.16.10:35538.service - OpenSSH per-connection server daemon (10.200.16.10:35538).
Jul 15 05:21:02.723849 sshd[6201]: Accepted publickey for core from 10.200.16.10 port 35538 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw
Jul 15 05:21:02.724854 sshd-session[6201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:21:02.728576 systemd-logind[1724]: New session 24 of user core.
Jul 15 05:21:02.732873 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 15 05:21:03.234694 sshd[6204]: Connection closed by 10.200.16.10 port 35538
Jul 15 05:21:03.236606 sshd-session[6201]: pam_unix(sshd:session): session closed for user core
Jul 15 05:21:03.240065 systemd-logind[1724]: Session 24 logged out. Waiting for processes to exit.
Jul 15 05:21:03.241050 systemd[1]: sshd@21-10.200.8.39:22-10.200.16.10:35538.service: Deactivated successfully.
Jul 15 05:21:03.244257 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 05:21:03.248839 systemd-logind[1724]: Removed session 24.
Jul 15 05:21:05.258524 containerd[1750]: time="2025-07-15T05:21:05.258464868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f696e7b18c748577176d18f81b1d0e226b844781e1598d358aa3fe59d28ba2d\" id:\"3f4e7529709c39492710879a9acde112145723f33209e348aa004cb756cecf03\" pid:6227 exited_at:{seconds:1752556865 nanos:258127862}"
Jul 15 05:21:08.349619 systemd[1]: Started sshd@22-10.200.8.39:22-10.200.16.10:35552.service - OpenSSH per-connection server daemon (10.200.16.10:35552).
Jul 15 05:21:08.980298 sshd[6238]: Accepted publickey for core from 10.200.16.10 port 35552 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw
Jul 15 05:21:08.981312 sshd-session[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:21:08.985238 systemd-logind[1724]: New session 25 of user core.
Jul 15 05:21:08.996885 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 15 05:21:09.484577 sshd[6241]: Connection closed by 10.200.16.10 port 35552
Jul 15 05:21:09.485070 sshd-session[6238]: pam_unix(sshd:session): session closed for user core
Jul 15 05:21:09.488067 systemd-logind[1724]: Session 25 logged out. Waiting for processes to exit.
Jul 15 05:21:09.489624 systemd[1]: sshd@22-10.200.8.39:22-10.200.16.10:35552.service: Deactivated successfully.
Jul 15 05:21:09.492101 systemd[1]: session-25.scope: Deactivated successfully.
Jul 15 05:21:09.495057 systemd-logind[1724]: Removed session 25.
Jul 15 05:21:14.600249 systemd[1]: Started sshd@23-10.200.8.39:22-10.200.16.10:44080.service - OpenSSH per-connection server daemon (10.200.16.10:44080).
Jul 15 05:21:15.234481 sshd[6253]: Accepted publickey for core from 10.200.16.10 port 44080 ssh2: RSA SHA256:WldnkDLUmxPu3hAw0Hy9/rmNIFGGN2Id/cNmLPJWvxw
Jul 15 05:21:15.235537 sshd-session[6253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:21:15.243130 systemd-logind[1724]: New session 26 of user core.
Jul 15 05:21:15.246866 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 15 05:21:15.721923 sshd[6256]: Connection closed by 10.200.16.10 port 44080
Jul 15 05:21:15.722298 sshd-session[6253]: pam_unix(sshd:session): session closed for user core
Jul 15 05:21:15.726079 systemd[1]: sshd@23-10.200.8.39:22-10.200.16.10:44080.service: Deactivated successfully.
Jul 15 05:21:15.728216 systemd[1]: session-26.scope: Deactivated successfully.
Jul 15 05:21:15.729841 systemd-logind[1724]: Session 26 logged out. Waiting for processes to exit.
Jul 15 05:21:15.731687 systemd-logind[1724]: Removed session 26.
Jul 15 05:21:17.258215 containerd[1750]: time="2025-07-15T05:21:17.258129906Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80a805f0f6b5b8ec92555f10e2d15c8cb51e04aa5a6d800db977c0206bc005cd\" id:\"4215d8b574422a7045425db08a623614795662d05865aa7f554b1af46fa0f70a\" pid:6279 exited_at:{seconds:1752556877 nanos:257899686}"